Build failed in Jenkins: kafka-trunk-jdk7 #841

2015-11-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Fix typo in sample Vagrantfile.local for AWS system tests

--
[...truncated 2808 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED


[jira] [Updated] (KAFKA-2867) Missing synchronization and improperly handled InterruptException in WorkerSourceTask

2015-11-19 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2867:
-
Status: Patch Available  (was: Open)

> Missing synchronization and improperly handled InterruptException in 
> WorkerSourceTask
> -
>
> Key: KAFKA-2867
> URL: https://issues.apache.org/jira/browse/KAFKA-2867
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0, 0.9.1.0
>
>
> In WorkerSourceTask, finishSuccessfulFlush() is not synchronized. In one case 
> (if the flush didn't even have to be started), this is ok because we are 
> already in a synchronized block. However, the other case is outside the 
> synchronized block.
> The result of this was transient failures of the system test for clean 
> bouncing copycat nodes. The bug doesn't cause exceptions because 
> finishSuccessfulFlush() only does a swap of two maps and sets a flag to 
> false. However, because of the swapping of the two maps that maintain 
> outstanding messages, we could by chance also be starting to send a message. 
> If the message accidentally gets added to the backlog queue while the 
> flushing flag is being toggled, we can "lose" that message temporarily in the 
> backlog queue. Then we'll get a callback that will log an error because it 
> can't find a record of the acked message (which, if it ever appears, should 
> be considered a critical issue since it shouldn't be possible), and then on 
> the next commit, it'll be swapped *back into place*. On the subsequent 
> commit, the flush will never be able to complete because the message will be 
> in the outstanding list, but will already have been acked. This, in turn, 
> makes it impossible to commit offsets, and results in duplicate messages even 
> under clean bounces where we should be able to get exactly once delivery 
> assuming no network delays or other issues.
> As a result of seeing this error, it became apparent that handling of 
> WorkerSourceTask threads that do not complete quickly enough was not working 
> properly. The ShutdownableThread should get interrupted if it does not 
> complete quickly enough, but logs like this would happen:
> {quote}
> [2015-11-18 01:02:13,897] INFO Stopping task verifiable-source-0 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:13,897] INFO Starting graceful shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:13,897] DEBUG WorkerSourceTask{id=verifiable-source-0} 
> Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:17,901] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,897] INFO Forcing shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:18,897] ERROR Graceful stop of task 
> WorkerSourceTask{id=verifiable-source-0} failed. 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:18,897] ERROR Failed to flush 
> WorkerSourceTask{id=verifiable-source-0}, timed out while waiting for 
> producer to flush outstanding messages 
> (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:18,898] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,898] INFO Finished stopping tasks in preparation for 
> rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> {quote}
> Actions in the background thread performing the commit continue to occur 
> after it is supposedly interrupted. This is because InterruptedExceptions 
> during the flush were being ignored (some time ago they were not even 
> possible). Instead, any interruption by the main thread trying to shut down 
> the thread in preparation for a rebalance should be handled by failing the 
> commit operation and returning so the thread can exit cleanly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2867: Fix missing WorkerSourceTask synch...

2015-11-19 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/566

KAFKA-2867: Fix missing WorkerSourceTask synchronization and handling of 
InterruptException.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2867-fix-source-sync-and-interrupt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/566.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #566


commit 3a610d4f24f9e102458f335d98c420a563c08aad
Author: Ewen Cheslack-Postava 
Date:   2015-11-19T21:37:17Z

KAFKA-2867: Fix missing WorkerSourceTask synchronization and handling of 
InterruptException.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2867) Missing synchronization and improperly handled InterruptException in WorkerSourceTask

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015243#comment-15015243
 ] 

ASF GitHub Bot commented on KAFKA-2867:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/566

KAFKA-2867: Fix missing WorkerSourceTask synchronization and handling of 
InterruptException.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2867-fix-source-sync-and-interrupt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/566.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #566


commit 3a610d4f24f9e102458f335d98c420a563c08aad
Author: Ewen Cheslack-Postava 
Date:   2015-11-19T21:37:17Z

KAFKA-2867: Fix missing WorkerSourceTask synchronization and handling of 
InterruptException.




> Missing synchronization and improperly handled InterruptException in 
> WorkerSourceTask
> -
>
> Key: KAFKA-2867
> URL: https://issues.apache.org/jira/browse/KAFKA-2867
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0, 0.9.1.0
>
>
> In WorkerSourceTask, finishSuccessfulFlush() is not synchronized. In one case 
> (if the flush didn't even have to be started), this is ok because we are 
> already in a synchronized block. However, the other case is outside the 
> synchronized block.
> The result of this was transient failures of the system test for clean 
> bouncing copycat nodes. The bug doesn't cause exceptions because 
> finishSuccessfulFlush() only does a swap of two maps and sets a flag to 
> false. However, because of the swapping of the two maps that maintain 
> outstanding messages, we could by chance also be starting to send a message. 
> If the message accidentally gets added to the backlog queue while the 
> flushing flag is being toggled, we can "lose" that message temporarily in the 
> backlog queue. Then we'll get a callback that will log an error because it 
> can't find a record of the acked message (which, if it ever appears, should 
> be considered a critical issue since it shouldn't be possible), and then on 
> the next commit, it'll be swapped *back into place*. On the subsequent 
> commit, the flush will never be able to complete because the message will be 
> in the outstanding list, but will already have been acked. This, in turn, 
> makes it impossible to commit offsets, and results in duplicate messages even 
> under clean bounces where we should be able to get exactly once delivery 
> assuming no network delays or other issues.
> As a result of seeing this error, it became apparent that handling of 
> WorkerSourceTask threads that do not complete quickly enough was not working 
> properly. The ShutdownableThread should get interrupted if it does not 
> complete quickly enough, but logs like this would happen:
> {quote}
> [2015-11-18 01:02:13,897] INFO Stopping task verifiable-source-0 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:13,897] INFO Starting graceful shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:13,897] DEBUG WorkerSourceTask{id=verifiable-source-0} 
> Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:17,901] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,897] INFO Forcing shutdown of thread 
> WorkerSourceTask-verifiable-source-0 
> (org.apache.kafka.connect.util.ShutdownableThread)
> [2015-11-18 01:02:18,897] ERROR Graceful stop of task 
> WorkerSourceTask{id=verifiable-source-0} failed. 
> (org.apache.kafka.connect.runtime.Worker)
> [2015-11-18 01:02:18,897] ERROR Failed to flush 
> WorkerSourceTask{id=verifiable-source-0}, timed out while waiting for 
> producer to flush outstanding messages 
> (org.apache.kafka.connect.runtime.WorkerSourceTask)
> [2015-11-18 01:02:18,898] DEBUG Submitting 1 entries to backing store 
> (org.apache.kafka.connect.storage.OffsetStorageWriter)
> [2015-11-18 01:02:18,898] INFO Finished stopping tasks in preparation for 
> rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> {quote}
> Actions in the background thread performing the commit continue to occur 
> after it is supposedly interrupted. This is because InterruptedExceptions 
> during the flush were being ignored (some time ago they were not even 
> possible).

[jira] [Created] (KAFKA-2867) Missing synchronization and improperly handled InterruptException in WorkerSourceTask

2015-11-19 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2867:


 Summary: Missing synchronization and improperly handled 
InterruptException in WorkerSourceTask
 Key: KAFKA-2867
 URL: https://issues.apache.org/jira/browse/KAFKA-2867
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Blocker
 Fix For: 0.9.0.0, 0.9.1.0


In WorkerSourceTask, finishSuccessfulFlush() is not synchronized. In one case 
(if the flush didn't even have to be started), this is ok because we are 
already in a synchronized block. However, the other case is outside the 
synchronized block.

The result of this was transient failures of the system test for clean bouncing 
copycat nodes. The bug doesn't cause exceptions because finishSuccessfulFlush() 
only does a swap of two maps and sets a flag to false. However, because of the 
swapping of the two maps that maintain outstanding messages, we could by chance 
also be starting to send a message. If the message accidentally gets added to 
the backlog queue while the flushing flag is being toggled, we can "lose" that 
message temporarily in the backlog queue. Then we'll get a callback that will 
log an error because it can't find a record of the acked message (which, if it 
ever appears, should be considered a critical issue since it shouldn't be 
possible), and then on the next commit, it'll be swapped *back into place*. On 
the subsequent commit, the flush will never be able to complete because the 
message will be in the outstanding list, but will already have been acked. 
This, in turn, makes it impossible to commit offsets, and results in duplicate 
messages even under clean bounces where we should be able to get exactly once 
delivery assuming no network delays or other issues.

As a result of seeing this error, it became apparent that handling of 
WorkerSourceTask threads that do not complete quickly enough was not working 
properly. The ShutdownableThread should get interrupted if it does not complete 
quickly enough, but logs like this would happen:

{quote}
[2015-11-18 01:02:13,897] INFO Stopping task verifiable-source-0 
(org.apache.kafka.connect.runtime.Worker)
[2015-11-18 01:02:13,897] INFO Starting graceful shutdown of thread 
WorkerSourceTask-verifiable-source-0 
(org.apache.kafka.connect.util.ShutdownableThread)
[2015-11-18 01:02:13,897] DEBUG WorkerSourceTask{id=verifiable-source-0} 
Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2015-11-18 01:02:17,901] DEBUG Submitting 1 entries to backing store 
(org.apache.kafka.connect.storage.OffsetStorageWriter)
[2015-11-18 01:02:18,897] INFO Forcing shutdown of thread 
WorkerSourceTask-verifiable-source-0 
(org.apache.kafka.connect.util.ShutdownableThread)
[2015-11-18 01:02:18,897] ERROR Graceful stop of task 
WorkerSourceTask{id=verifiable-source-0} failed. 
(org.apache.kafka.connect.runtime.Worker)
[2015-11-18 01:02:18,897] ERROR Failed to flush 
WorkerSourceTask{id=verifiable-source-0}, timed out while waiting for producer 
to flush outstanding messages 
(org.apache.kafka.connect.runtime.WorkerSourceTask)
[2015-11-18 01:02:18,898] DEBUG Submitting 1 entries to backing store 
(org.apache.kafka.connect.storage.OffsetStorageWriter)
[2015-11-18 01:02:18,898] INFO Finished stopping tasks in preparation for 
rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
{quote}

Actions in the background thread performing the commit continue to occur after 
it is supposedly interrupted. This is because InterruptedExceptions during the 
flush were being ignored (some time ago they were not even possible). Instead, 
any interruption by the main thread trying to shut down the thread in 
preparation for a rebalance should be handled by failing the commit operation 
and returning so the thread can exit cleanly.
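The interrupt handling described in the last paragraph can be sketched as follows. This is a heavily simplified, hypothetical illustration (not the actual WorkerSourceTask code): instead of swallowing InterruptedException while waiting for the flush, the commit fails and the interrupt status is restored so the worker thread can exit cleanly when shutdown is requested before a rebalance.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of a commit that waits for outstanding messages to flush. A timeout
// or an interrupt both fail the commit instead of being silently ignored.
class CommitSketch {
    public boolean commitOffsets(CountDownLatch flushDone, long timeoutMs) {
        try {
            // Wait for the producer to flush outstanding messages.
            if (!flushDone.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                return false; // timed out waiting for the flush
            }
        } catch (InterruptedException e) {
            // Shutdown requested: fail the commit and preserve the interrupt
            // status so the enclosing thread loop can observe it and exit.
            Thread.currentThread().interrupt();
            return false;
        }
        return true;
    }
}
```

The key detail is re-asserting the interrupt flag in the catch block; dropping it is what allowed the background commit to keep running after the "Forcing shutdown" log line above.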





[jira] [Commented] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015239#comment-15015239
 ] 

Ismael Juma commented on KAFKA-2863:


[~parth.brahmbhatt], do you mind if I take this?

> Authorizer should provide lifecycle (shutdown) methods
> --
>
> Key: KAFKA-2863
> URL: https://issues.apache.org/jira/browse/KAFKA-2863
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Joel Koshy
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.0.1
>
>
> Authorizer supports configure, but no shutdown. This would be useful for 
> non-trivial authorizers that need to do some cleanup (e.g., shutting down 
> threadpools and such) on broker shutdown.
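One shape such a lifecycle hook could take is sketched below. This is an illustrative interface, not the actual kafka.security.auth.Authorizer API: the point is simply pairing configure() with a close() that the broker invokes on shutdown.

```java
import java.io.Closeable;
import java.util.Map;

// Hypothetical authorizer interface with a shutdown hook: extending Closeable
// gives implementations a well-known place to release thread pools,
// connections, or caches when the broker shuts down.
interface LifecycleAuthorizer extends Closeable {
    void configure(Map<String, ?> configs);
    boolean authorize(String principal, String operation, String resource);
    @Override
    void close(); // invoked on broker shutdown for cleanup
}

// Trivial implementation showing where the cleanup would go.
class AllowAllAuthorizer implements LifecycleAuthorizer {
    private boolean closed = false;
    public void configure(Map<String, ?> configs) { }
    public boolean authorize(String p, String o, String r) { return !closed; }
    public void close() { closed = true; } // e.g. shut down thread pools here
}
```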





Build failed in Jenkins: kafka-trunk-jdk7 #840

2015-11-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2820: Remove log threshold on appender in tools-log4j.properties

--
[...truncated 2763 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #171

2015-11-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2820: Remove log threshold on appender in tools-log4j.properties

[cshapi] MINOR: Fix typo in sample Vagrantfile.local for AWS system tests

--
[...truncated 103 lines...]
14 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes
:kafka-trunk-jdk8:clients:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning

:kafka-trunk-jdk8:clients:processTestResources
:kafka-trunk-jdk8:clients:testClasses
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:393:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:274:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:195:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.

[GitHub] kafka pull request: MINOR: Fix typo in sample Vagrantfile.local fo...

2015-11-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/565




[jira] [Commented] (KAFKA-2820) System tests: log level is no longer propagating from service classes

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015140#comment-15015140
 ] 

ASF GitHub Bot commented on KAFKA-2820:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/556


> System tests: log level is no longer propagating from service classes
> -
>
> Key: KAFKA-2820
> URL: https://issues.apache.org/jira/browse/KAFKA-2820
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.1.0
>
>
> Many system test service classes specify a log level which should be 
> reflected in the log4j output of the corresponding kafka tools etc.
> However, at least some of these log levels are no longer propagating, which 
> makes tests much harder to debug after they have run.
> E.g. KafkaService specifies a DEBUG log level, but all collected log output 
> from brokers is at INFO level or above.
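The fix referenced in the commit message above removes a hardcoded threshold on the log appender. An illustrative log4j.properties fragment (hypothetical, not the actual Kafka file) shows the mechanism: an appender-level Threshold caps output regardless of what level individual loggers request, which is why service classes asking for DEBUG still only produced INFO.

```properties
# Illustrative fragment: root logger and a console appender.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# Removing a line like the following lets DEBUG-level loggers configured by
# the system test services actually reach the collected output:
# log4j.appender.stdout.Threshold=INFO
```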





[GitHub] kafka pull request: KAFKA-2820: Remove log threshold on appender i...

2015-11-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/556




[GitHub] kafka pull request: MINOR: Fix typo in sample Vagrantfile.local fo...

2015-11-19 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/565

MINOR: Fix typo in sample Vagrantfile.local for AWS system tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka fix-aws-vagrantfile-local-example

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/565.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #565


commit 00e66afedb264e3890d34c8ca8869f083048e851
Author: Ewen Cheslack-Postava 
Date:   2015-11-20T03:12:02Z

MINOR: Fix typo in sample Vagrantfile.local for AWS system tests






[jira] [Reopened] (KAFKA-2866) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reopened KAFKA-2866:


> Bump up commons-collections version to 3.2.2 to address a security flaw
> ---
>
> Key: KAFKA-2866
> URL: https://issues.apache.org/jira/browse/KAFKA-2866
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Update commons-collections from 3.2.1 to 3.2.2 because of a major security 
> vulnerability. Many other open source projects use commons-collections and 
> are also affected.
> Please see 
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
>  for the discovery of the vulnerability.
> https://issues.apache.org/jira/browse/COLLECTIONS-580 has the discussion 
> thread of the fix.
> https://blogs.apache.org/foundation/entry/apache_commons_statement_to_widespread
>  is the ASF's statement on the vulnerability.





[jira] [Resolved] (KAFKA-2866) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-2866.

Resolution: Not A Problem






[jira] [Updated] (KAFKA-2866) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2866:
---
Status: Patch Available  (was: Reopened)






Jenkins build is back to normal : kafka-trunk-jdk7 #839

2015-11-19 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2861) system tests: grep logs for errors as part of validation

2015-11-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015036#comment-15015036
 ] 

Ewen Cheslack-Postava commented on KAFKA-2861:
--

[~geoffra] Right. The whitelist could work, as long as you always have some 
escape hatch to get out of this mode entirely if matching all the errors that 
*could* happen becomes too onerous. The question is whether you can make the 
filtering of expected errors low-cost enough that people don't immediately jump 
to the escape hatch as soon as they see one error.

> system tests: grep logs for errors as part of validation
> 
>
> Key: KAFKA-2861
> URL: https://issues.apache.org/jira/browse/KAFKA-2861
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>
> There may be errors going on under the hood that validation steps do not 
> detect, but which are logged at the ERROR level by brokers or clients. We are 
> more likely to catch subtle issues if we pattern match the server log for 
> ERROR as part of validation, and fail the test in this case.
> For example, in https://issues.apache.org/jira/browse/KAFKA-2813, the error 
> is transient, so our test may pass; however, we still want this issue to be 
> visible.
> To avoid spurious failures, we would probably want to be able to have a 
> whitelist of acceptable errors.
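The validation step discussed above could be sketched as follows. This is a minimal illustration only; the function name `unexpected_errors`, the whitelist contents, and the log-line format are all hypothetical, not part of the ducktape framework:

```python
import re

# Hypothetical whitelist of ERROR patterns known to be benign for a given
# test; any ERROR line not matching one of these fails validation.
WHITELIST = [
    re.compile(r"expected during rebalance"),
]

def unexpected_errors(log_lines, whitelist=WHITELIST):
    """Return ERROR-level log lines not matched by any whitelist pattern."""
    errors = [line for line in log_lines if " ERROR " in line]
    return [line for line in errors
            if not any(p.search(line) for p in whitelist)]
```

A test would then fail if `unexpected_errors(...)` returns a non-empty list, which is exactly where the "escape hatch" concern applies: every transient-but-expected error must gain a whitelist entry or the test becomes flaky.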





Build failed in Jenkins: kafka-trunk-jdk8 #170

2015-11-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2824: MiniKDC based tests don't run in VirtualBox

--
[...truncated 2825 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > tes

[jira] [Commented] (KAFKA-2866) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014846#comment-15014846
 ] 

ASF GitHub Bot commented on KAFKA-2866:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/564

KAFKA-2866: Bump up commons-collections version to 3.2.2 to address a…

… security flaw

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka commons

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/564.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #564


commit a5d803788da051c44ae9e298e4e68fe2862294a3
Author: Grant Henke 
Date:   2015-11-20T00:22:40Z

KAFKA-2866: Bump up commons-collections version to 3.2.2 to address a 
security flaw









[GitHub] kafka pull request: KAFKA-2866: Bump up commons-collections versio...

2015-11-19 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/564

KAFKA-2866: Bump up commons-collections version to 3.2.2 to address a…

… security flaw

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka commons

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/564.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #564


commit a5d803788da051c44ae9e298e4e68fe2862294a3
Author: Grant Henke 
Date:   2015-11-20T00:22:40Z

KAFKA-2866: Bump up commons-collections version to 3.2.2 to address a 
security flaw






[jira] [Commented] (KAFKA-2866) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014821#comment-15014821
 ] 

Grant Henke commented on KAFKA-2866:


This dependency is only in Kafka test code, so the fix is not critical.






[jira] [Updated] (KAFKA-2866) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2866:
---
Priority: Major  (was: Blocker)






[jira] [Created] (KAFKA-2866) Bump up commons-collections version to 3.2.2 to address a security flaw

2015-11-19 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2866:
--

 Summary: Bump up commons-collections version to 3.2.2 to address a 
security flaw
 Key: KAFKA-2866
 URL: https://issues.apache.org/jira/browse/KAFKA-2866
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Grant Henke
Assignee: Grant Henke
Priority: Blocker


Update commons-collections from 3.2.1 to 3.2.2 because of a major security 
vulnerability. Many other open source projects use commons-collections and are 
also affected.

Please see 
http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
 for the discovery of the vulnerability.

https://issues.apache.org/jira/browse/COLLECTIONS-580 has the discussion thread 
of the fix.

https://blogs.apache.org/foundation/entry/apache_commons_statement_to_widespread
 is the ASF's statement on the vulnerability.





[jira] [Resolved] (KAFKA-2824) MiniKDC based tests don't run in VirtualBox

2015-11-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2824.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 520
[https://github.com/apache/kafka/pull/520]

> MiniKDC based tests don't run in VirtualBox
> ---
>
> Key: KAFKA-2824
> URL: https://issues.apache.org/jira/browse/KAFKA-2824
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
> Fix For: 0.9.1.0
>
>
> When running system tests in virtualbox the miniKDC server isn't reachable. 
> Works fine in EC2





[jira] [Commented] (KAFKA-2824) MiniKDC based tests don't run in VirtualBox

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014798#comment-15014798
 ] 

ASF GitHub Bot commented on KAFKA-2824:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/520







[GitHub] kafka pull request: KAFKA-2824: MiniKDC based tests don't run in V...

2015-11-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/520




[jira] [Created] (KAFKA-2865) Improve Request API Error Code Documentation

2015-11-19 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2865:
--

 Summary: Improve Request API Error Code Documentation
 Key: KAFKA-2865
 URL: https://issues.apache.org/jira/browse/KAFKA-2865
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


Current protocol documentation 
(https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-ErrorCodes)
  contains a list of all the error codes possible through the Kafka request 
API, but this is getting unwieldy to manage since error codes span different 
request types and occasionally have slightly different semantics. It would be 
nice to list the error codes for each API separately with request-specific 
descriptions as well as suggested handling (when it makes sense). 
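A per-API layout could be as simple as a nested mapping keyed by request type. The sketch below is purely illustrative of the proposed documentation shape; the `ERROR_DOCS` name and the handling notes are hypothetical, though the numeric codes and error names are from the protocol guide linked above:

```python
# Hypothetical per-request-type error documentation: each API lists only
# the codes it can actually return, with request-specific descriptions
# and suggested handling.
ERROR_DOCS = {
    "FetchRequest": {
        1: "OFFSET_OUT_OF_RANGE: requested offset is outside the log; "
           "reset position per the consumer's offset reset policy",
        6: "NOT_LEADER_FOR_PARTITION: refresh metadata and retry",
    },
    "ProduceRequest": {
        2: "CORRUPT_MESSAGE: message failed its CRC check; retriable",
        6: "NOT_LEADER_FOR_PARTITION: refresh metadata and retry",
    },
}
```

This avoids the single global table: a reader of the Fetch API section sees only Fetch-relevant codes, and the same code (e.g. 6) can carry different guidance per request type.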





[GitHub] kafka pull request: KAFKA-2642: Run replication tests with SSL and...

2015-11-19 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/563

KAFKA-2642: Run replication tests with SSL and SASL clients

For SSL and SASL replication tests, set security protocol for clients as 
well.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2642

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/563.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #563








[jira] [Commented] (KAFKA-2642) Run replication tests in ducktape with SSL for clients

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014765#comment-15014765
 ] 

ASF GitHub Bot commented on KAFKA-2642:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/563

KAFKA-2642: Run replication tests with SSL and SASL clients

For SSL and SASL replication tests, set security protocol for clients as 
well.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2642

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/563.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #563






> Run replication tests in ducktape with SSL for clients
> --
>
> Key: KAFKA-2642
> URL: https://issues.apache.org/jira/browse/KAFKA-2642
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Under KAFKA-2581, replication tests were parametrized to run with SSL for 
> interbroker communication, but not for clients. When KAFKA-2603 is committed, 
> the tests should be able to use SSL for clients as well.





[jira] [Commented] (KAFKA-2597) Add Eclipse directories to .gitignore

2015-11-19 Thread Jorge Quilcate (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014737#comment-15014737
 ] 

Jorge Quilcate commented on KAFKA-2597:
---

Fixed: https://github.com/apache/kafka/pull/562

> Add Eclipse directories to .gitignore
> -
>
> Key: KAFKA-2597
> URL: https://issues.apache.org/jira/browse/KAFKA-2597
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Trivial
> Fix For: 0.9.0.0
>
>
> Add to {{.gitignore}} the Eclipse IDE directories {{.metadata}} and 
> {{.recommenders}}. These store state of the IDE's workspace, and should not 
> be checked in.





[GitHub] kafka pull request: Adding .gitignore files to filter Eclipse IDE ...

2015-11-19 Thread jeqo
Github user jeqo closed the pull request at:

https://github.com/apache/kafka/pull/562




[jira] [Commented] (KAFKA-2861) system tests: grep logs for errors as part of validation

2015-11-19 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014665#comment-15014665
 ] 

Geoff Anderson commented on KAFKA-2861:
---

[~ewencp] Good point, I'm not sure if this is workable generically.

This falls into the category of: how do we surface things or events that seem 
to be bad, but are not bad in some expected way? How do we increase the 
probability that we'll catch anomalous behavior without creating false failures?








[jira] [Updated] (KAFKA-2864) Bad zookeeper host causes broker to shutdown uncleanly and stall producers

2015-11-19 Thread Mahdi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahdi updated KAFKA-2864:
-
Attachment: kafka.log

> Bad zookeeper host causes broker to shutdown uncleanly and stall producers
> --
>
> Key: KAFKA-2864
> URL: https://issues.apache.org/jira/browse/KAFKA-2864
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.8.2.1
>Reporter: Mahdi
>Priority: Critical
> Attachments: kafka.log
>
>
> We are using Kafka 0.8.2.1 and noticed that kafka/zookeeper-client were not 
> able to gracefully handle a non-existent ZooKeeper instance. This caused 
> one of our brokers to get stuck during a self-inflicted shutdown and that 
> seemed to impact the partitions for which the broker was a leader even though 
> we had two other replicas.
> Here is a timeline of what happened (shortened for brevity, I'll attach log 
> snippets):
> We have a 7 node zookeeper cluster. Two of our nodes were decommissioned and 
> their dns records removed (zookeeper15 and zookeeper16). The decommissioning 
> happened about two weeks earlier. We noticed the following in the logs
> - Opening socket connection to server ip-10-0-0-1.ec2.internal/10.0.0.1:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> - Client session timed out, have not heard from server in 858ms for sessionid 
> 0x1250c5c0f1f5001c, closing socket connection and attempting reconnect
> - Opening socket connection to server ip-10.0.0.2.ec2.internal/10.0.0.2:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> - zookeeper state changed (Disconnected)
> - Client session timed out, have not heard from server in 2677ms for 
> sessionid 0x1250c5c0f1f5001c, closing socket connection and attempting 
> reconnect
> - Opening socket connection to server ip-10.0.0.3.ec2.internal/10.0.0.3:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> - Socket connection established to ip-10.0.0.3.ec2.internal/10.0.0.3:2181, 
> initiating session
> - zookeeper state changed (Expired)
> - Initiating client connection, 
> connectString=zookeeper21.example.com:2181,zookeeper19.example.com:2181,zookeeper22.example.com:2181,zookeeper18.example.com:2181,zookeeper20.example.com:2181,zookeeper16.example.com:2181,zookeeper15.example.com:2181/foo/kafka/central
>  sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@3bbc39f8
> - Unable to reconnect to ZooKeeper service, session 0x1250c5c0f1f5001c has 
> expired, closing socket connection
> - Unable to re-establish connection. Notifying consumer of the following 
> exception:
> org.I0Itec.zkclient.exception.ZkException: Unable to connect to 
> zookeeper21.example.com:2181,zookeeper19.example.com:2181,zookeeper22.example.com:2181,zookeeper18.example.com:2181,zookeeper20.example.com:2181,zookeeper16.example.com:2181,zookeeper15.example.com:2181/foo/kafka/central
> at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:69)
> at org.I0Itec.zkclient.ZkClient.reconnect(ZkClient.java:1176)
> at org.I0Itec.zkclient.ZkClient.processStateChanged(ZkClient.java:649)
> at org.I0Itec.zkclient.ZkClient.process(ZkClient.java:560)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> Caused by: java.net.UnknownHostException: zookeeper16.example.com: unknown 
> error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:67)
> ... 5 more
> That seems to have caused the following:
>  [main-EventThread] [org.apache.zookeeper.ClientCnxn ]: EventThread shut 
> down
> Which in turn caused kafka to shut itself down
> [Thread-2] [kafka.server.KafkaServer]: [Kafka Server 13], 
> shutting down
> [Thread-2] [kafka.server.KafkaServer]: [Kafka Server 13], 
> Starting controlled shutdown
> However, the shutdown didn't go as expected, apparently due to an NPE in the 
> zk client
> 2015-11-12T12:03:40.101Z WARN  [Thread-2   ] 
> [kafka.utils.Utils$  

[jira] [Created] (KAFKA-2864) Bad zookeeper host causes broker to shutdown uncleanly and stall producers

2015-11-19 Thread Mahdi (JIRA)
Mahdi created KAFKA-2864:


 Summary: Bad zookeeper host causes broker to shutdown uncleanly 
and stall producers
 Key: KAFKA-2864
 URL: https://issues.apache.org/jira/browse/KAFKA-2864
 Project: Kafka
  Issue Type: Bug
  Components: zkclient
Affects Versions: 0.8.2.1
Reporter: Mahdi
Priority: Critical


We are using Kafka 0.8.2.1 and noticed that kafka/zookeeper-client were not 
able to gracefully handle a non-existent ZooKeeper instance. This caused one of 
our brokers to get stuck during a self-inflicted shutdown and that seemed to 
impact the partitions for which the broker was a leader even though we had two 
other replicas.

Here is a timeline of what happened (shortened for brevity, I'll attach log 
snippets):

We have a 7 node zookeeper cluster. Two of our nodes were decommissioned and 
their dns records removed (zookeeper15 and zookeeper16). The decommissioning 
happened about two weeks earlier. We noticed the following in the logs

- Opening socket connection to server ip-10-0-0-1.ec2.internal/10.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error)
- Client session timed out, have not heard from server in 858ms for sessionid 
0x1250c5c0f1f5001c, closing socket connection and attempting reconnect
- Opening socket connection to server ip-10.0.0.2.ec2.internal/10.0.0.2:2181. 
Will not attempt to authenticate using SASL (unknown error)
- zookeeper state changed (Disconnected)
- Client session timed out, have not heard from server in 2677ms for sessionid 
0x1250c5c0f1f5001c, closing socket connection and attempting reconnect
- Opening socket connection to server ip-10.0.0.3.ec2.internal/10.0.0.3:2181. 
Will not attempt to authenticate using SASL (unknown error)
- Socket connection established to ip-10.0.0.3.ec2.internal/10.0.0.3:2181, 
initiating session
- zookeeper state changed (Expired)
- Initiating client connection, 
connectString=zookeeper21.example.com:2181,zookeeper19.example.com:2181,zookeeper22.example.com:2181,zookeeper18.example.com:2181,zookeeper20.example.com:2181,zookeeper16.example.com:2181,zookeeper15.example.com:2181/foo/kafka/central
 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@3bbc39f8
- Unable to reconnect to ZooKeeper service, session 0x1250c5c0f1f5001c has 
expired, closing socket connection
- Unable to re-establish connection. Notifying consumer of the following 
exception:
org.I0Itec.zkclient.exception.ZkException: Unable to connect to 
zookeeper21.example.com:2181,zookeeper19.example.com:2181,zookeeper22.example.com:2181,zookeeper18.example.com:2181,zookeeper20.example.com:2181,zookeeper16.example.com:2181,zookeeper15.example.com:2181/foo/kafka/central
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:69)
at org.I0Itec.zkclient.ZkClient.reconnect(ZkClient.java:1176)
at org.I0Itec.zkclient.ZkClient.processStateChanged(ZkClient.java:649)
at org.I0Itec.zkclient.ZkClient.process(ZkClient.java:560)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Caused by: java.net.UnknownHostException: zookeeper16.example.com: unknown error
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at 
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at 
org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:67)
... 5 more


That seems to have caused the following:
 [main-EventThread] [org.apache.zookeeper.ClientCnxn ]: EventThread shut 
down

Which in turn caused Kafka to shut itself down:
[Thread-2] [kafka.server.KafkaServer]: [Kafka Server 13], shutting 
down
[Thread-2] [kafka.server.KafkaServer]: [Kafka Server 13], Starting 
controlled shutdown

However, the shutdown didn't go as expected, apparently due to an NPE in the 
ZooKeeper client:

2015-11-12T12:03:40.101Z WARN  [Thread-2   ] 
[kafka.utils.Utils$  ]:
java.lang.NullPointerException
at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:117)
at org.I0Itec.zkclient.ZkClient$10.call(ZkClient.java:992)
at org.I0Itec.zkclient.ZkClient$10.call(ZkClient.java:988)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:883)
at org.I0Itec.zkclient.ZkClient.readDa
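
The root cause above is that the ZooKeeper client resolves every host in the 
connect string eagerly (in the StaticHostProvider constructor), so a single 
stale DNS entry fails the whole reconnect. A minimal sketch of the defensive 
alternative, filtering unresolvable hosts up front (a hypothetical helper, not 
ZooKeeper's or Kafka's actual API; chroot-suffix handling omitted):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class LenientHostResolver {

    /**
     * Hypothetical pre-filter: resolve each host in a ZooKeeper-style
     * connect string up front and drop entries whose DNS lookup fails,
     * instead of letting one stale host abort the entire connection
     * attempt as described in the report above.
     */
    public static List<String> resolvable(String connectString) {
        List<String> ok = new ArrayList<>();
        for (String hostPort : connectString.split(",")) {
            String host = hostPort.split(":")[0].trim();
            try {
                // Throws UnknownHostException when the name does not resolve.
                InetAddress.getAllByName(host);
                ok.add(hostPort.trim());
            } catch (UnknownHostException e) {
                System.err.println("Skipping unresolvable host: " + host);
            }
        }
        return ok;
    }

    public static void main(String[] args) {
        // ".invalid" is reserved (RFC 2606) and never resolves.
        System.out.println(resolvable("localhost:2181,zookeeper16.invalid:2181"));
    }
}
```

A broker could run such a filter before handing the connect string to the 
client, at the cost of masking configuration errors silently.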

[jira] [Commented] (KAFKA-2627) Kafka Heap Size increase impact performance badly

2015-11-19 Thread Venkat Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014522#comment-15014522
 ] 

Venkat Ramachandran commented on KAFKA-2627:


I don't think this is due to a defect in the Kafka code. You should try to 
reduce the heap to the minimum required and also use the G1 garbage collector.

> Kafka Heap Size increase impact performance badly
> -
>
> Key: KAFKA-2627
> URL: https://issues.apache.org/jira/browse/KAFKA-2627
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: CentOS Linux release 7.0.1406 (Core)
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/";
> BUG_REPORT_URL="https://bugs.centos.org/";
> CentOS Linux release 7.0.1406 (Core)
> CentOS Linux release 7.0.1406 (Core)
>Reporter: Mihir Pandya
>
> Initial Kafka server was configured with 
> KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
> As we had plenty of resources available, we changed it to the value below:
> KAFKA_HEAP_OPTS="-Xmx16G -Xms8G"
> The change severely impacted Kafka and ZooKeeper; we started getting various 
> issues on both ends.
> Not all replicas were joining the ISR, and there was a problem with leader 
> election, which in turn threw socket connection errors.
> To debug, we checked kafkaServer-gc.log; we were seeing GC (Allocation 
> Failure) events even though plenty of memory was still available.
> == GC Error ===
> 2015-10-08T09:43:08.796+: 4.651: [GC (Allocation Failure) 4.651: [ParNew: 
> 272640K->7265K(306688K), 0.0277514 secs] 272640K->7265K(1014528K), 0.0281243 
> secs] [Times: user=0.03 sys=0.05, real=0.03 secs]
> 2015-10-08T09:43:11.317+: 7.172: [GC (Allocation Failure) 7.172: [ParNew: 
> 279905K->3793K(306688K), 0.0157898 secs] 279905K->3793K(1014528K), 0.0159913 
> secs] [Times: user=0.03 sys=0.01, real=0.02 secs]
> 2015-10-08T09:43:13.522+: 9.377: [GC (Allocation Failure) 9.377: [ParNew: 
> 276433K->2827K(306688K), 0.0064236 secs] 276433K->2827K(1014528K), 0.0066834 
> secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
> 2015-10-08T09:43:15.518+: 11.372: [GC (Allocation Failure) 11.373: 
> [ParNew: 275467K->3090K(306688K), 0.0055454 secs] 275467K->3090K(1014528K), 
> 0.0057979 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 2015-10-08T09:43:17.558+: 13.412: [GC (Allocation Failure) 13.412: 
> [ParNew: 275730K->3346K(306688K), 0.0053757 secs] 275730K->3346K(1014528K), 
> 0.0055039 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 
> = Other Kafka Errors =
> [2015-10-01 15:35:19,039] INFO conflict in /brokers/ids/3 data: 
> {"jmx_port":-1,"timestamp":"1443709506024","host":"","version":1,"port":9092}
>  stored data: 
> {"jmx_port":-1,"timestamp":"1443702430352","host":"","version":1,"port":9092}
>  (kafka.utils.ZkUtils$)
> [2015-10-01 15:35:19,042] INFO I wrote this conflicted ephemeral node 
> [{"jmx_port":-1,"timestamp":"1443709506024","host":"","version":1,"port":9092}]
>  at /brokers/ids/3 a while back in a different session, hence I will backoff 
> for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> [2015-10-01 15:23:12,378] INFO Closing socket connection to /172.28.72.162. 
> (kafka.network.Processor)
> [2015-10-01 15:23:12,378] INFO Closing socket connection to /172.28.72.162. 
> (kafka.network.Processor)
> [2015-10-01 15:21:53,831] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:21:53,834] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:21:53,835] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:21:53,837] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:20:36,210] WARN [ReplicaFetcherThread-0-2], Error in fetch 
> Name: FetchRequest; Version: 0; CorrelationId: 9; ClientId: 
> ReplicaFetcherThread-0-2; ReplicaId: 3; MaxWait: 500 ms; MinBytes: 1 bytes; 
> RequestInfo: [__consumer_offsets,17] -> 
> PartitionFetchInfo(0,1048576),[__consumer_offsets,23] -> 
> PartitionFetchInfo(0,1048576),[__consumer_offsets,29] -> 
> PartitionFetchInfo(0,
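
As a sketch of the suggestion in the comment above (reduce the heap, use G1): 
Kafka's startup script reads KAFKA_HEAP_OPTS and KAFKA_JVM_PERFORMANCE_OPTS 
from the environment. The sizes and tuning flags below are illustrative 
assumptions, not recommendations from this thread; the right values depend on 
the workload.

```shell
# Illustrative only: a modest fixed heap plus the G1 collector.
# bin/kafka-server-start.sh picks up both variables.
export KAFKA_HEAP_OPTS="-Xmx4G -Xms4G"
export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
bin/kafka-server-start.sh config/server.properties
```

Fixing -Xms equal to -Xmx avoids heap-resizing pauses; oversizing the heap 
mainly lengthens collection work without helping a page-cache-heavy broker.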

[GitHub] kafka pull request: Adding .gitignore files to filter Eclipse IDE ...

2015-11-19 Thread jeqo
GitHub user jeqo opened a pull request:

https://github.com/apache/kafka/pull/562

Adding .gitignore files to filter Eclipse IDE bin directories

In the case of the "core" project, .gitignore includes .cache-main and 
.cache-tests generated by the 'gradlew jar' command.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeqo/kafka KAFKA-2597

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/562.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #562


commit 1aa6e325f161f1d6c2caa5189712ddb886b3d24b
Author: Jorge Quilcate 
Date:   2015-11-19T22:08:30Z

Adding .gitignore files to filter Eclipse IDE bin directories




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2597) Add Eclipse directories to .gitignore

2015-11-19 Thread Jorge Quilcate (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014452#comment-15014452
 ] 

Jorge Quilcate commented on KAFKA-2597:
---

The './gradlew eclipse' command creates a 'bin' directory and a '.gitignore' 
file (filtering that directory) in each project: clients, connect/file, 
connect/json, connect/runtime, core, log4j-appender, streams, and tools. 
Also, when './gradlew jar' is executed, it generates these files: .cache-main 
and .cache-tests.

If we should filter these files and directories, should we do so in the root 
.gitignore or in each project's .gitignore file?

Environment: Eclipse Mars, Scala 2.11, Gradle 2.9, Ubuntu 15.
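
For illustration, a single root-level .gitignore covering all of the artifacts 
discussed here might look like the fragment below (hypothetical placement; the 
thread leaves open whether root or per-project files are preferred):

```
# Eclipse workspace state (should never be committed)
.metadata/
.recommenders/
# Per-project Eclipse build output
**/bin/
# Scala incremental-compile caches produced by 'gradlew jar'
.cache-main
.cache-tests
```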

> Add Eclipse directories to .gitignore
> -
>
> Key: KAFKA-2597
> URL: https://issues.apache.org/jira/browse/KAFKA-2597
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Trivial
> Fix For: 0.9.0.0
>
>
> Add to {{.gitignore}} the Eclipse IDE directories {{.metadata}} and 
> {{.recommenders}}. These store state of the IDE's workspace, and should not 
> be checked in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.9.0.0 Candidate 3

2015-11-19 Thread Ewen Cheslack-Postava
FYI, I found a blocker for Kafka Connect
https://issues.apache.org/jira/browse/KAFKA-2859. The fix is already in,
but we'll need to do one more round of RC.

-Ewen

On Thu, Nov 19, 2015 at 11:45 AM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> +1 (non-binding)
>
> We integrated this (source rather than binary) into our build yesterday and
> it has been running for a day in our test clusters with light load
> throughout and occasional heavy load. We are running on IBM JRE with SSL
> clients.
>
> On Thu, Nov 19, 2015 at 6:55 PM, Guozhang Wang  wrote:
>
> > +1 (binding).
> >
> > Verified quick start, console clients and topic/consumer tools.
> >
> > Guozhang
> >
> > On Thu, Nov 19, 2015 at 10:45 AM, Ismael Juma  wrote:
> >
> > > +1 (non-binding).
> > >
> > > Verified source and binary artifacts, ran ./gradlew testAll with JDK
> > 7u80,
> > > quick start on source artifact and Scala 2.11 binary artifact.
> > >
> > > Ismael
> > >
> > > On Wed, Nov 18, 2015 at 5:57 AM, Jun Rao  wrote:
> > >
> > > > This is the third candidate for release of Apache Kafka 0.9.0.0. This
> > > > is a major release that includes (1) authentication (through SSL and
> > > > SASL) and authorization, (2) a new java consumer, (3) a Kafka connect
> > > > framework for data ingestion and egress, and (4) quotas. Since this is
> > > > a major release, we will give people a bit more time for trying this
> > > > out.
> > > >
> > > > Release Notes for the 0.9.0.0 release
> > > >
> > > >
> > >
> >
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/RELEASE_NOTES.html
> > > >
> > > > *** Please download, test and vote by Friday, Nov. 20, 10pm PT
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > > > and sha2 (SHA256) checksum.
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/
> > > >
> > > > * Maven artifacts to be voted upon prior to release:
> > > > https://repository.apache.org/content/groups/staging/
> > > >
> > > > * scala-doc
> > > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/scaladoc/
> > > >
> > > > * java-doc
> > > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/javadoc/
> > > >
> > > > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> > > >
> > > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b8872868168f7b1172c08879f4b699db1ccb5ab7
> > > >
> > > > * Documentation
> > > > http://kafka.apache.org/090/documentation.html
> > > >
> > > > /***
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
>
> --
> Thank you...
>
> Regards,
>
> Rajini
>



-- 
Thanks,
Ewen


Re: [VOTE] 0.9.0.0 Candidate 3

2015-11-19 Thread Rajini Sivaram
+1 (non-binding)

We integrated this (source rather than binary) into our build yesterday and
it has been running for a day in our test clusters with light load
throughout and occasional heavy load. We are running on IBM JRE with SSL
clients.

On Thu, Nov 19, 2015 at 6:55 PM, Guozhang Wang  wrote:

> +1 (binding).
>
> Verified quick start, console clients and topic/consumer tools.
>
> Guozhang
>
> On Thu, Nov 19, 2015 at 10:45 AM, Ismael Juma  wrote:
>
> > +1 (non-binding).
> >
> > Verified source and binary artifacts, ran ./gradlew testAll with JDK
> 7u80,
> > quick start on source artifact and Scala 2.11 binary artifact.
> >
> > Ismael
> >
> > On Wed, Nov 18, 2015 at 5:57 AM, Jun Rao  wrote:
> >
> > > This is the third candidate for release of Apache Kafka 0.9.0.0. This is
> > > a major release that includes (1) authentication (through SSL and SASL)
> > > and authorization, (2) a new java consumer, (3) a Kafka connect framework
> > > for data ingestion and egress, and (4) quotas. Since this is a major
> > > release, we will give people a bit more time for trying this out.
> > >
> > > Release Notes for the 0.9.0.0 release
> > >
> > >
> >
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Friday, Nov. 20, 10pm PT
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > > and sha2 (SHA256) checksum.
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/
> > >
> > > * Maven artifacts to be voted upon prior to release:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * scala-doc
> > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/scaladoc/
> > >
> > > * java-doc
> > > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/javadoc/
> > >
> > > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b8872868168f7b1172c08879f4b699db1ccb5ab7
> > >
> > > * Documentation
> > > http://kafka.apache.org/090/documentation.html
> > >
> > > /***
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> >
>
>
>
> --
> -- Guozhang
>



-- 
Thank you...

Regards,

Rajini


[jira] [Created] (KAFKA-2863) Authorizer should provide lifecycle (shutdown) methods

2015-11-19 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-2863:
-

 Summary: Authorizer should provide lifecycle (shutdown) methods
 Key: KAFKA-2863
 URL: https://issues.apache.org/jira/browse/KAFKA-2863
 Project: Kafka
  Issue Type: Improvement
  Components: security
Reporter: Joel Koshy
Assignee: Parth Brahmbhatt
 Fix For: 0.9.0.1


The Authorizer interface supports configure() but has no shutdown hook. One 
would be useful for non-trivial authorizers that need to do cleanup (e.g., 
shutting down thread pools) on broker shutdown.
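
A minimal sketch of the proposal (hypothetical interface and class names, not 
Kafka's actual Authorizer API): pairing configure() with a close() hook that 
the broker invokes during shutdown.

```java
import java.io.Closeable;
import java.util.Map;

// Hypothetical lifecycle-aware authorizer contract: extending Closeable
// gives the broker a standard hook to release authorizer-owned resources.
interface LifecycleAwareAuthorizer extends Closeable {
    void configure(Map<String, ?> configs);
    boolean authorize(String principal, String operation, String resource);
    @Override
    void close();  // e.g., shut down thread pools, flush audit logs
}

public class AclAuthorizerSketch implements LifecycleAwareAuthorizer {
    private boolean running;

    @Override
    public void configure(Map<String, ?> configs) {
        running = true;  // real implementations would start pools/caches here
    }

    @Override
    public boolean authorize(String principal, String operation, String resource) {
        return running;  // toy policy: allow everything while running
    }

    @Override
    public void close() {
        running = false;  // cleanup hook called on broker shutdown
    }

    public static void main(String[] args) {
        AclAuthorizerSketch auth = new AclAuthorizerSketch();
        auth.configure(Map.of());
        System.out.println(auth.authorize("alice", "Read", "topic-a")); // true
        auth.close();
        System.out.println(auth.authorize("alice", "Read", "topic-a")); // false
    }
}
```

The broker would call close() after draining requests, so a slow authorizer 
cleanup cannot stall request handling.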



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.9.0.0 Candidate 3

2015-11-19 Thread Guozhang Wang
+1 (binding).

Verified quick start, console clients and topic/consumer tools.

Guozhang

On Thu, Nov 19, 2015 at 10:45 AM, Ismael Juma  wrote:

> +1 (non-binding).
>
> Verified source and binary artifacts, ran ./gradlew testAll with JDK 7u80,
> quick start on source artifact and Scala 2.11 binary artifact.
>
> Ismael
>
> On Wed, Nov 18, 2015 at 5:57 AM, Jun Rao  wrote:
>
> > This is the third candidate for release of Apache Kafka 0.9.0.0. This is a
> > major release that includes (1) authentication (through SSL and SASL) and
> > authorization, (2) a new java consumer, (3) a Kafka connect framework for
> > data ingestion and egress, and (4) quotas. Since this is a major
> > release, we will give people a bit more time for trying this out.
> >
> > Release Notes for the 0.9.0.0 release
> >
> >
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Friday, Nov. 20, 10pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > and sha2 (SHA256) checksum.
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/
> >
> > * Maven artifacts to be voted upon prior to release:
> > https://repository.apache.org/content/groups/staging/
> >
> > * scala-doc
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/scaladoc/
> >
> > * java-doc
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/javadoc/
> >
> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b8872868168f7b1172c08879f4b699db1ccb5ab7
> >
> > * Documentation
> > http://kafka.apache.org/090/documentation.html
> >
> > /***
> >
> > Thanks,
> >
> > Jun
> >
>



-- 
-- Guozhang


Re: [VOTE] 0.9.0.0 Candidate 3

2015-11-19 Thread Ismael Juma
+1 (non-binding).

Verified source and binary artifacts, ran ./gradlew testAll with JDK 7u80,
quick start on source artifact and Scala 2.11 binary artifact.

Ismael

On Wed, Nov 18, 2015 at 5:57 AM, Jun Rao  wrote:

> This is the third candidate for release of Apache Kafka 0.9.0.0. This is a
> major release that includes (1) authentication (through SSL and SASL) and
> authorization, (2) a new java consumer, (3) a Kafka connect framework for
> data ingestion and egress, and (4) quotas. Since this is a major
> release, we will give people a bit more time for trying this out.
>
> Release Notes for the 0.9.0.0 release
>
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, Nov. 20, 10pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/scaladoc/
>
> * java-doc
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate3/javadoc/
>
> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b8872868168f7b1172c08879f4b699db1ccb5ab7
>
> * Documentation
> http://kafka.apache.org/090/documentation.html
>
> /***
>
> Thanks,
>
> Jun
>


[jira] [Commented] (KAFKA-2862) Incorrect help description for MirrorMaker's message.handler.args

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014093#comment-15014093
 ] 

ASF GitHub Bot commented on KAFKA-2862:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/561

KAFKA-2862: Fix MirrorMaker's message.handler.args description



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2862

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/561.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #561


commit 4dd3cc1a2658f0eb52602334139ec6f81358f260
Author: Ashish Singh 
Date:   2015-11-19T18:27:11Z

KAFKA-2862: Fix MirrorMaker's message.handler.args description




> Incorrect help description for MirrorMaker's message.handler.args
> -
>
> Key: KAFKA-2862
> URL: https://issues.apache.org/jira/browse/KAFKA-2862
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Help description for MirrorMaker's message.handler.args is not correct. Fix 
> it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2862: Fix MirrorMaker's message.handler....

2015-11-19 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/561

KAFKA-2862: Fix MirrorMaker's message.handler.args description



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2862

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/561.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #561


commit 4dd3cc1a2658f0eb52602334139ec6f81358f260
Author: Ashish Singh 
Date:   2015-11-19T18:27:11Z

KAFKA-2862: Fix MirrorMaker's message.handler.args description




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2862) Incorrect help description for MirrorMaker's message.handler.args

2015-11-19 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2862:
--
Summary: Incorrect help description for MirrorMaker's message.handler.args  
(was: Incorrect help description for MirrorMaker's message.handler)

> Incorrect help description for MirrorMaker's message.handler.args
> -
>
> Key: KAFKA-2862
> URL: https://issues.apache.org/jira/browse/KAFKA-2862
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Help description for MirrorMaker's message.handler is not correct. Fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2862) Incorrect help description for MirrorMaker's message.handler.args

2015-11-19 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2862:
--
Description: Help description for MirrorMaker's message.handler.args is not 
correct. Fix it.  (was: Help description for MirrorMaker's message.handler is 
not correct. Fix it.)

> Incorrect help description for MirrorMaker's message.handler.args
> -
>
> Key: KAFKA-2862
> URL: https://issues.apache.org/jira/browse/KAFKA-2862
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Help description for MirrorMaker's message.handler.args is not correct. Fix 
> it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2862) Incorrect help description for MirrorMaker's message.handler

2015-11-19 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2862:
-

 Summary: Incorrect help description for MirrorMaker's 
message.handler
 Key: KAFKA-2862
 URL: https://issues.apache.org/jira/browse/KAFKA-2862
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh


Help description for MirrorMaker's message.handler is not correct. Fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #838

2015-11-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Documentation improvements

--
[...truncated 1485 lines...]

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.ByteBufferMessageSetTest > testOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED
:test_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:compileTestJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processTestResources UP-TO-DATE
:kafka-trunk-jdk7:clients:testClasses UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,
   

Build failed in Jenkins: kafka_0.9.0_jdk7 #34

2015-11-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Documentation improvements

--
[...truncated 741 lines...]
kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.server.KafkaConfigTest > testAdvertiseConfigured PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeBothMsAndHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionValid PASSED

kafka.server.KafkaConfigTest > testSpecificProperties PASSED

kafka.server.KafkaConfigTest > testDefaultCompressionType PASSED

kafka.server.KafkaConfigTest > testDuplicateListeners PASSED

kafka.server.KafkaConfigTest > testLogRetentionUnlimited PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testAdvertiseDefaults PASSED

kafka.server.KafkaConfigTest > testBadListenerProtocol PASSED

kafka.server.KafkaConfigTest > testListenerDefaults PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndHoursProvided 
PASSED

kafka.server.KafkaConfigTest > testUncleanElectionDisabled PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testFromPropsInvalid PASSED

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testAdvertisePortDefault PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.DynamicConfigChangeTest > testProcessNotification PASSED

kafka.server.DynamicConfigChangeTest > testClientQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.SimpleFetchTest > testReadFromLog PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> Process 'Gradle Test Executor 2' finished with non-zero exit value 1

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at 
org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at 
org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at 
org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
at 
org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecut

Build failed in Jenkins: kafka-trunk-jdk8 #169

2015-11-19 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Documentation improvements

--
[...truncated 4520 lines...]
org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
org.junit.ComparisonFailure: expected: but was:
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.con

[jira] [Commented] (KAFKA-1451) Broker stuck due to leader election race

2015-11-19 Thread Zach Cox (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014006#comment-15014006
 ] 

Zach Cox commented on KAFKA-1451:
-

[~fpj] Yes we saw the "I wrote this conflicted ephemeral node" error messages, 
we saw lots of partitions in/out of ISRs and a lot of this too:

{code}
[2015-11-19 01:05:51,685] INFO Opening socket connection to server 
ip-10-10-1-35.ec2.internal/10.10.1.35:2181. Will not attempt to authenticate 
using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-19 01:05:51,685] INFO Socket connection established to 
ip-10-10-1-35.ec2.internal/10.10.1.35:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2015-11-19 01:05:51,687] INFO Unable to reconnect to ZooKeeper service, 
session 0x54a0e5799a8195d has expired, closing socket connection 
(org.apache.zookeeper.ClientCnxn)
[2015-11-19 01:05:51,687] INFO zookeeper state changed (Expired) 
(org.I0Itec.zkclient.ZkClient)
[2015-11-19 01:05:51,687] INFO Initiating client connection, 
connectString=zookeeper1.production.redacted.com:2181,zookeeper2.production.redacted.com:2181,zookeeper3.production.redacted.com:2181/kafka
 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@ace1333 
(org.apache.zookeeper.ZooKeeper)
[2015-11-19 01:05:51,701] INFO EventThread shut down 
(org.apache.zookeeper.ClientCnxn)
[2015-11-19 01:05:51,701] ERROR Error handling event ZkEvent[New session event 
sent to kafka.controller.KafkaController$SessionExpirationListener@2261adb8] 
(org.I0Itec.zkclient.ZkEventThread)
java.lang.IllegalStateException: Kafka scheduler has not been started
  at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
  at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
  at 
kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
  at 
kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1108)
  at 
kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
  at 
kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1107)
  at kafka.utils.Utils$.inLock(Utils.scala:535)
  at 
kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1107)
  at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
  at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
[2015-11-19 01:05:51,701] INFO re-registering broker info in ZK for broker 3 
(kafka.server.KafkaHealthcheck)
[2015-11-19 01:05:51,701] INFO Opening socket connection to server 
ip-10-10-1-104.ec2.internal/10.10.1.104:2181. Will not attempt to authenticate 
using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-19 01:05:51,702] INFO Socket connection established to 
ip-10-10-1-104.ec2.internal/10.10.1.104:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2015-11-19 01:05:51,713] INFO Session establishment complete on server 
ip-10-10-1-104.ec2.internal/10.10.1.104:2181, sessionid = 0x64a0e57972a1a85, 
negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2015-11-19 01:05:51,713] INFO zookeeper state changed (SyncConnected) 
(org.I0Itec.zkclient.ZkClient)
[2015-11-19 01:05:51,718] INFO Registered broker 3 at path /brokers/ids/3 with 
address mesos-slave3.production.redacted.com:9092. (kafka.utils.ZkUtils$)
[2015-11-19 01:05:51,718] INFO done re-registering broker 
(kafka.server.KafkaHealthcheck)
[2015-11-19 01:05:51,718] INFO Subscribing to /brokers/topics path to watch for 
new topics (kafka.server.KafkaHealthcheck)
[2015-11-19 01:05:51,721] INFO New leader is 1 
(kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
{code}

> Broker stuck due to leader election race 
> -
>
> Key: KAFKA-1451
> URL: https://issues.apache.org/jira/browse/KAFKA-1451
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Maciek Makowski
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1451.patch, KAFKA-1451_2014-07-28_20:27:32.patch, 
> KAFKA-1451_2014-07-29_10:13:23.patch
>
>
> h3. Symptoms
> The broker does not become available due to being stuck in an infinite loop 
> while electing leader. This can be recognised by the following line being 
> repeatedly written to server.log:
> {code}
> [2014-05-14 04:35:09,187] INFO I wrote this conflicted ephemeral node 
> [{"version":1,"brokerid":1,"timestamp":"1400060079108"}] at /controller a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> {code}
> h3. Steps t

[jira] [Commented] (KAFKA-1451) Broker stuck due to leader election race

2015-11-19 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013983#comment-15013983
 ] 

Flavio Junqueira commented on KAFKA-1451:
-

[~zcox] If you observed messages like the ones in this comment above

https://issues.apache.org/jira/browse/KAFKA-1451?focusedCommentId=14593515&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14593515

then I suspect this will be resolved with the fix of KAFKA-1387, which will be 
available in 0.9.

> Broker stuck due to leader election race 
> -
>
> Key: KAFKA-1451
> URL: https://issues.apache.org/jira/browse/KAFKA-1451
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Maciek Makowski
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1451.patch, KAFKA-1451_2014-07-28_20:27:32.patch, 
> KAFKA-1451_2014-07-29_10:13:23.patch
>
>
> h3. Symptoms
> The broker does not become available due to being stuck in an infinite loop 
> while electing leader. This can be recognised by the following line being 
> repeatedly written to server.log:
> {code}
> [2014-05-14 04:35:09,187] INFO I wrote this conflicted ephemeral node 
> [{"version":1,"brokerid":1,"timestamp":"1400060079108"}] at /controller a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> {code}
> h3. Steps to Reproduce
> In a single kafka 0.8.1.1 node, single zookeeper 3.4.6 (but will likely 
> behave the same with the ZK version included in Kafka distribution) node 
> setup:
> # start both zookeeper and kafka (in any order)
> # stop zookeeper
> # stop kafka
> # start kafka
> # start zookeeper
> h3. Likely Cause
> {{ZookeeperLeaderElector}} subscribes to data changes on startup, and then 
> triggers an election. If the deletion of the ephemeral {{/controller}} node 
> associated with previous zookeeper session of the broker happens after 
> subscription to changes in new session, election will be invoked twice, once 
> from {{startup}} and once from {{handleDataDeleted}}:
> * {{startup}}: acquire {{controllerLock}}
> * {{startup}}: subscribe to data changes
> * zookeeper: delete {{/controller}} since the session that created it timed 
> out
> * {{handleDataDeleted}}: {{/controller}} was deleted
> * {{handleDataDeleted}}: wait on {{controllerLock}}
> * {{startup}}: elect -- writes {{/controller}}
> * {{startup}}: release {{controllerLock}}
> * {{handleDataDeleted}}: acquire {{controllerLock}}
> * {{handleDataDeleted}}: elect -- attempts to write {{/controller}} and then 
> gets into infinite loop as a result of conflict
> {{createEphemeralPathExpectConflictHandleZKBug}} assumes that the existing 
> znode was written from different session, which is not true in this case; it 
> was written from the same session. That adds to the confusion.
> h3. Suggested Fix
> In {{ZookeeperLeaderElector.startup}} first run {{elect}} and then subscribe 
> to data changes.
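The suggested fix above (run {{elect}} before subscribing to data changes) can be sketched as follows. This is a hypothetical Java model of the locking and ordering only, not the actual Scala {{ZookeeperLeaderElector}} code; the names elect, subscribeToDataChanges, and handleDataDeleted mirror the steps in the description above.

```java
import java.util.concurrent.locks.ReentrantLock;

class LeaderElectorSketch {
    private final ReentrantLock controllerLock = new ReentrantLock();
    private boolean amILeader = false;
    private int electionCount = 0;

    // Fixed ordering: elect first, then subscribe, so a deletion event from
    // the broker's previous session can no longer race with the initial
    // election and trigger a second, conflicting write of /controller.
    void startup() {
        controllerLock.lock();
        try {
            elect();                  // writes /controller for this session
            subscribeToDataChanges(); // only now start watching for deletions
        } finally {
            controllerLock.unlock();
        }
    }

    // Watcher callback: re-elect only if we are not already the leader.
    void handleDataDeleted() {
        controllerLock.lock();
        try {
            if (!amILeader) {
                elect();
            }
        } finally {
            controllerLock.unlock();
        }
    }

    private void elect() {
        amILeader = true;     // stand-in for writing the /controller znode
        electionCount++;
    }

    private void subscribeToDataChanges() {
        // register the ZooKeeper watcher here (omitted in this sketch)
    }

    int elections() {
        return electionCount;
    }
}
```

With this ordering, a stale deletion event arriving after startup finds the broker already elected and does nothing, instead of re-entering the election loop.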



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1451) Broker stuck due to leader election race

2015-11-19 Thread Zach Cox (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013883#comment-15013883
 ] 

Zach Cox commented on KAFKA-1451:
-

We experienced this yesterday on a 3-node 0.8.2.1 cluster, which caused a major 
outage for several hours. Restarting Kafka brokers several times, along with 
restarting Zookeeper nodes, did not resolve the issue. We identified one of the 
brokers that seemed to be going in/out of ISRs repeatedly, and ended up 
deleting all of its state on disk & restarting it. This was the only thing that 
finally resolved the issue. Maybe there was some corrupt state on that broker's 
disk? We still have that broker's state (moved its data dir, didn't actually 
delete) if that is helpful at all.

> Broker stuck due to leader election race 
> -
>
> Key: KAFKA-1451
> URL: https://issues.apache.org/jira/browse/KAFKA-1451
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Maciek Makowski
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1451.patch, KAFKA-1451_2014-07-28_20:27:32.patch, 
> KAFKA-1451_2014-07-29_10:13:23.patch
>
>
> h3. Symptoms
> The broker does not become available due to being stuck in an infinite loop 
> while electing leader. This can be recognised by the following line being 
> repeatedly written to server.log:
> {code}
> [2014-05-14 04:35:09,187] INFO I wrote this conflicted ephemeral node 
> [{"version":1,"brokerid":1,"timestamp":"1400060079108"}] at /controller a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> {code}
> h3. Steps to Reproduce
> In a single kafka 0.8.1.1 node, single zookeeper 3.4.6 (but will likely 
> behave the same with the ZK version included in Kafka distribution) node 
> setup:
> # start both zookeeper and kafka (in any order)
> # stop zookeeper
> # stop kafka
> # start kafka
> # start zookeeper
> h3. Likely Cause
> {{ZookeeperLeaderElector}} subscribes to data changes on startup, and then 
> triggers an election. If the deletion of the ephemeral {{/controller}} node 
> associated with previous zookeeper session of the broker happens after 
> subscription to changes in new session, election will be invoked twice, once 
> from {{startup}} and once from {{handleDataDeleted}}:
> * {{startup}}: acquire {{controllerLock}}
> * {{startup}}: subscribe to data changes
> * zookeeper: delete {{/controller}} since the session that created it timed 
> out
> * {{handleDataDeleted}}: {{/controller}} was deleted
> * {{handleDataDeleted}}: wait on {{controllerLock}}
> * {{startup}}: elect -- writes {{/controller}}
> * {{startup}}: release {{controllerLock}}
> * {{handleDataDeleted}}: acquire {{controllerLock}}
> * {{handleDataDeleted}}: elect -- attempts to write {{/controller}} and then 
> gets into infinite loop as a result of conflict
> {{createEphemeralPathExpectConflictHandleZKBug}} assumes that the existing 
> znode was written from different session, which is not true in this case; it 
> was written from the same session. That adds to the confusion.
> h3. Suggested Fix
> In {{ZookeeperLeaderElector.startup}} first run {{elect}} and then subscribe 
> to data changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Documentation improvements

2015-11-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/550


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2423) Introduce Scalastyle

2015-11-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2423:
---
Assignee: Grant Henke  (was: Ismael Juma)

> Introduce Scalastyle
> 
>
> Key: KAFKA-2423
> URL: https://issues.apache.org/jira/browse/KAFKA-2423
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Grant Henke
>
> This is similar to Checkstyle (which we already use), but for Scala:
> http://www.scalastyle.org/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2423) Introduce Scalastyle

2015-11-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013652#comment-15013652
 ] 

Ismael Juma commented on KAFKA-2423:


I assigned it to you [~granthenke] since you are working on it. :)

> Introduce Scalastyle
> 
>
> Key: KAFKA-2423
> URL: https://issues.apache.org/jira/browse/KAFKA-2423
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Grant Henke
>
> This is similar to Checkstyle (which we already use), but for Scala:
> http://www.scalastyle.org/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-19 Thread netcafe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013639#comment-15013639
 ] 

netcafe commented on KAFKA-2843:


I ran grep -i 'deleted brokers' *.log on all 3 Kafka brokers and did not find a 
leader failover at that time.


> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>Assignee: jin xing
>
> I use a SimpleConsumer to fetch messages from brokers (fetchSize > 
> messageSize). When the consumer gets an empty messageSet, e.g.:
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
> 
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(topic, partition)
> 
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // in our case, I found the highWatermark may not equal the current
>     // offset, but we have not reproduced it. Can this case happen? If so, why?
>   }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2423) Introduce Scalastyle

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013632#comment-15013632
 ] 

ASF GitHub Bot commented on KAFKA-2423:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/560

KAFKA-2423: Introduce Scalastyle

Just the buildscript changes and rules configuration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka scalastyle

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/560.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #560


commit c164bf032ef48ce0a69b45ef45336e7a04334edc
Author: Grant Henke 
Date:   2015-11-19T14:40:58Z

KAFKA-2423: Introduce Scalastyle

Just the buildscript changes and rules configuration.




> Introduce Scalastyle
> 
>
> Key: KAFKA-2423
> URL: https://issues.apache.org/jira/browse/KAFKA-2423
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> This is similar to Checkstyle (which we already use), but for Scala:
> http://www.scalastyle.org/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2423: Introduce Scalastyle

2015-11-19 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/560

KAFKA-2423: Introduce Scalastyle

Just the buildscript changes and rules configuration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka scalastyle

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/560.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #560


commit c164bf032ef48ce0a69b45ef45336e7a04334edc
Author: Grant Henke 
Date:   2015-11-19T14:40:58Z

KAFKA-2423: Introduce Scalastyle

Just the buildscript changes and rules configuration.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2227) Delete me

2015-11-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2227:
---
Fix Version/s: (was: 0.9.0.0)

> Delete me
> -
>
> Key: KAFKA-2227
> URL: https://issues.apache.org/jira/browse/KAFKA-2227
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Andrii Biletskyi
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2228) Delete me

2015-11-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2228:
---
Fix Version/s: (was: 0.9.0.0)

> Delete me
> -
>
> Key: KAFKA-2228
> URL: https://issues.apache.org/jira/browse/KAFKA-2228
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Andrii Biletskyi
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2643) Run mirror maker tests in ducktape with SSL and SASL

2015-11-19 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-2643:
--
Summary: Run mirror maker tests in ducktape with SSL and SASL  (was: Run 
mirror maker tests in ducktape with SSL)

> Run mirror maker tests in ducktape with SSL and SASL
> 
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT. Should be run with 
> SSL as well. This requires console consumer timeout in new consumers which is 
> being added in KAFKA-2603



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2861) system tests: grep logs for errors as part of validation

2015-11-19 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013227#comment-15013227
 ] 

Ewen Cheslack-Postava commented on KAFKA-2861:
--

[~geoffra] Of course, the problem with this is that if you intentionally 
trigger an error you'll also fail the test...

I've thought about this before, but it's really difficult to generically detect 
these issues -- log levels aren't good enough, trying to find stack traces 
(i.e. logging an exception) doesn't work, etc.

> system tests: grep logs for errors as part of validation
> 
>
> Key: KAFKA-2861
> URL: https://issues.apache.org/jira/browse/KAFKA-2861
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>
> There may be errors going on under the hood that validation steps do not 
> detect, but which are logged at the ERROR level by brokers or clients. We are 
> more likely to catch subtle issues if we pattern match the server log for 
> ERROR as part of validation, and fail the test in this case.
> For example, in https://issues.apache.org/jira/browse/KAFKA-2813, the error 
> is transient, so our test may pass; however, we still want this issue to be 
> visible.
> To avoid spurious failures, we would probably want to be able to have a 
> whitelist of acceptable errors.
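A minimal sketch of the grep-with-whitelist idea described above. The file paths, log contents, and whitelist patterns are illustrative only, not taken from the actual Kafka system-test framework:

```shell
# Known-benign ERROR patterns, one extended regex per line (illustrative).
cat > /tmp/error_whitelist.txt <<'EOF'
Connection to .* was disconnected
EOF

# A tiny fake server log just for this demonstration.
printf '%s\n' \
  '[2015-11-19] INFO starting up' \
  '[2015-11-19] ERROR Connection to node 2 was disconnected' \
  > /tmp/server.log

# Fail validation only on ERROR lines that no whitelist pattern matches.
if grep 'ERROR' /tmp/server.log | grep -vE -f /tmp/error_whitelist.txt | grep -q .; then
  echo 'VALIDATION FAILED: unexpected ERROR entries'
else
  echo 'VALIDATION PASSED'
fi
```

Here the single ERROR line matches a whitelist pattern, so validation passes; an unlisted ERROR would survive the `grep -v` filter and fail the check.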



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2643) Run mirror maker tests in ducktape with SSL

2015-11-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013171#comment-15013171
 ] 

ASF GitHub Bot commented on KAFKA-2643:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/559

KAFKA-2643: Run mirror maker ducktape tests with SSL and SASL

Run tests with SSL, SASL_PLAINTEXT and SASL_SSL. Same security protocol is 
used for source and target Kafka.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2643

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/559.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #559


commit 6b50682201d9b989f0b9332bbd0d4ed9333938ee
Author: Rajini Sivaram 
Date:   2015-11-19T08:47:15Z

KAFKA-2643: Run mirror maker ducktape tests with SSL and SASL




> Run mirror maker tests in ducktape with SSL
> ---
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT. Should be run with 
> SSL as well. This requires console consumer timeout in new consumers which is 
> being added in KAFKA-2603



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2643: Run mirror maker ducktape tests wi...

2015-11-19 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/559

KAFKA-2643: Run mirror maker ducktape tests with SSL and SASL

Run tests with SSL, SASL_PLAINTEXT and SASL_SSL. Same security protocol is 
used for source and target Kafka.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2643

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/559.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #559


commit 6b50682201d9b989f0b9332bbd0d4ed9333938ee
Author: Rajini Sivaram 
Date:   2015-11-19T08:47:15Z

KAFKA-2643: Run mirror maker ducktape tests with SSL and SASL




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2627) Kafka Heap Size increase impact performance badly

2015-11-19 Thread Allen Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013129#comment-15013129
 ] 

Allen Chan commented on KAFKA-2627:
---

I am also experiencing the same issue. 
I allocated 10G to the heap and am getting the same GC entries in gc.log even 
though 9GB of the heap is unused. 
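As a quick sanity check (a hedged sketch; the regex only targets the ParNew-format lines quoted in the report below), the pause times can be pulled out of such log lines to confirm they are a few milliseconds, i.e. routine young-generation collections rather than actual failures:

```python
import re

# Matches the ParNew portion of a JDK 7/8 GC log line, e.g.
# "[ParNew: 272640K->7265K(306688K), 0.0277514 secs]"
GC_LINE = re.compile(r"\[ParNew: (\d+)K->(\d+)K\((\d+)K\), ([\d.]+) secs\]")

def parse_parnew(line):
    """Return young-gen occupancy before/after and the pause in ms, or None."""
    m = GC_LINE.search(line)
    if not m:
        return None
    before_kb, after_kb, young_kb, secs = m.groups()
    return {"before_kb": int(before_kb), "after_kb": int(after_kb),
            "young_kb": int(young_kb), "pause_ms": float(secs) * 1000}

sample = ("2015-10-08T09:43:08.796+: 4.651: [GC (Allocation Failure) 4.651: "
          "[ParNew: 272640K->7265K(306688K), 0.0277514 secs] "
          "272640K->7265K(1014528K), 0.0281243 secs]")
print(parse_parnew(sample))
```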

> Kafka Heap Size increase impact performance badly
> -
>
> Key: KAFKA-2627
> URL: https://issues.apache.org/jira/browse/KAFKA-2627
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: CentOS Linux release 7.0.1406 (Core)
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CentOS Linux release 7.0.1406 (Core)
> CentOS Linux release 7.0.1406 (Core)
>Reporter: Mihir Pandya
>
> Initial Kafka server was configured with 
> KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
> As we had spare resources to utilize, we changed it to the value below: 
> KAFKA_HEAP_OPTS="-Xmx16G -Xms8G"
> The change badly impacted both Kafka and Zookeeper; we started getting 
> various issues at both ends.
> Not all replicas were in the ISR, and there was an issue with leader 
> election, which in turn was throwing socket connection errors.
> To debug, we checked kafkaServer-gc.log and found GC (Allocation Failure) 
> entries even though a lot more memory was available.
> === GC Errors ===
> 2015-10-08T09:43:08.796+: 4.651: [GC (Allocation Failure) 4.651: [ParNew: 
> 272640K->7265K(306688K), 0.0277514 secs] 272640K->7265K(1014528K), 0.0281243 
> secs] [Times: user=0.03 sys=0.05, real=0.03 secs]
> 2015-10-08T09:43:11.317+: 7.172: [GC (Allocation Failure) 7.172: [ParNew: 
> 279905K->3793K(306688K), 0.0157898 secs] 279905K->3793K(1014528K), 0.0159913 
> secs] [Times: user=0.03 sys=0.01, real=0.02 secs]
> 2015-10-08T09:43:13.522+: 9.377: [GC (Allocation Failure) 9.377: [ParNew: 
> 276433K->2827K(306688K), 0.0064236 secs] 276433K->2827K(1014528K), 0.0066834 
> secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
> 2015-10-08T09:43:15.518+: 11.372: [GC (Allocation Failure) 11.373: 
> [ParNew: 275467K->3090K(306688K), 0.0055454 secs] 275467K->3090K(1014528K), 
> 0.0057979 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 2015-10-08T09:43:17.558+: 13.412: [GC (Allocation Failure) 13.412: 
> [ParNew: 275730K->3346K(306688K), 0.0053757 secs] 275730K->3346K(1014528K), 
> 0.0055039 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 
> = Other Kafka Errors =
> [2015-10-01 15:35:19,039] INFO conflict in /brokers/ids/3 data: 
> {"jmx_port":-1,"timestamp":"1443709506024","host":"","version":1,"port":9092}
>  stored data: 
> {"jmx_port":-1,"timestamp":"1443702430352","host":"","version":1,"port":9092}
>  (kafka.utils.ZkUtils$)
> [2015-10-01 15:35:19,042] INFO I wrote this conflicted ephemeral node 
> [{"jmx_port":-1,"timestamp":"1443709506024","host":"","version":1,"port":9092}]
>  at /brokers/ids/3 a while back in a different session, hence I will backoff 
> for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> [2015-10-01 15:23:12,378] INFO Closing socket connection to /172.28.72.162. 
> (kafka.network.Processor)
> [2015-10-01 15:23:12,378] INFO Closing socket connection to /172.28.72.162. 
> (kafka.network.Processor)
> [2015-10-01 15:21:53,831] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:21:53,834] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:21:53,835] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:21:53,837] ERROR [ReplicaFetcherThread-4-1], Error for 
> partition [workorder-topic,1] to broker 1:class 
> kafka.common.NotLeaderForPartitionException 
> (kafka.server.ReplicaFetcherThread)
> [2015-10-01 15:20:36,210] WARN [ReplicaFetcherThread-0-2], Error in fetch 
> Name: FetchRequest; Version: 0; CorrelationId: 9; ClientId: 
> ReplicaFetcherThread-0-2; ReplicaId: 3; MaxWait: 500 ms; MinBytes: 1 bytes; 
> RequestInfo: [__consumer_offsets,17] -> 
> PartitionFetchInfo(0,1048576),[__consumer_offsets,23] -> 
> PartitionFetchInfo(0,1048576),[__consumer_offsets,29] -> 
> PartitionFetchInfo(0,1048576),[__consumer_offsets,35
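One commonly suggested mitigation for the configuration reported above (a hedged sketch, not a verified tuning recommendation; the variable names assume the stock kafka-run-class.sh environment) is to pin -Xms to -Xmx so the JVM never resizes the heap, and to keep the broker heap modest, since Kafka leans on the OS page cache rather than a large Java heap:

```shell
# Hypothetical settings, shown for illustration: pin the heap so it is never
# resized, and keep it modest rather than 16G.
export KAFKA_HEAP_OPTS="-Xmx6G -Xms6G"
# JDK 7/8-era GC logging flags for inspecting pauses; note this overrides the
# script's default performance opts, and the stock start scripts may already
# enable similar logging to logs/kafkaServer-gc.log.
export KAFKA_JVM_PERFORMANCE_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
```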