[jira] [Resolved] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-3198.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 858
[https://github.com/apache/kafka/pull/858]

> Ticket Renewal Thread exits prematurely due to inverted comparison
> --
>
> Key: KAFKA-3198
> URL: https://issues.apache.org/jira/browse/KAFKA-3198
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> Line 152 of Login.java:
> {code}
> if (isUsingTicketCache && tgt.getRenewTill() != null && 
> tgt.getRenewTill().getTime() >= expiry) {
> {code}
> This line is used to determine whether to exit the thread and issue an error 
> to the user.
> The >= should be < since we are actually able to renew if the renewTill time 
> is later than the current ticket expiration.
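The inverted check quoted above can be sketched as a minimal self-contained condition. The helper name and surrounding structure here are hypothetical; only the boolean expression mirrors the corrected line in Login.java (the original used `>=` where `<` is needed, so the renewal thread exited exactly when renewal was still possible):

```java
import java.util.Date;

public class RenewalCheck {
    /**
     * Returns true when renewal is impossible: the ticket cache is in use
     * but the TGT's renewable lifetime (renewTill) ends before the current
     * ticket expiry, so a fresh kinit would be needed instead of a renewal.
     */
    static boolean cannotRenew(boolean isUsingTicketCache, Date renewTill, long expiry) {
        // Corrected comparison: renewTill < expiry means nothing left to renew.
        return isUsingTicketCache && renewTill != null && renewTill.getTime() < expiry;
    }

    public static void main(String[] args) {
        long expiry = 1_000_000L;
        // renewTill after expiry: renewal is possible, the thread should keep running.
        System.out.println(cannotRenew(true, new Date(2_000_000L), expiry)); // false
        // renewTill before expiry: renewal is impossible, the thread may exit.
        System.out.println(cannotRenew(true, new Date(500_000L), expiry));   // true
    }
}
```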



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131885#comment-15131885
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/858







[GitHub] kafka pull request: KAFKA-3198: Ticket Renewal Thread exits premat...

2016-02-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/858


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #343

2016-02-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fix restoring for source KTable

--
[...truncated 4695 lines...]

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSendRecordsRetries 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
SourceTask.initialize(): expected: 1, actual: 1
SourceTask.stop(): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
at 
org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testSlowTaskStart(WorkerSourceTaskTest.java:288)

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.ap

Build failed in Jenkins: kafka-trunk-jdk7 #1015

2016-02-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fix restoring for source KTable

--
[...truncated 5040 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest

[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131814#comment-15131814
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

GitHub user kunickiaj reopened a pull request:

https://github.com/apache/kafka/pull/858

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted c…

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison

The >= should be < since we are actually able to renew if the renewTill 
time is later than the current ticket expiration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #858


commit 65b10cd2e0bae97833be5459c29953695d8d396a
Author: Adam Kunicki 
Date:   2016-02-03T18:35:29Z

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison









[GitHub] kafka pull request: KAFKA-3198: Ticket Renewal Thread exits premat...

2016-02-03 Thread kunickiaj
GitHub user kunickiaj reopened a pull request:

https://github.com/apache/kafka/pull/858

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted c…

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison

The >= should be < since we are actually able to renew if the renewTill 
time is later than the current ticket expiration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #858


commit 65b10cd2e0bae97833be5459c29953695d8d396a
Author: Adam Kunicki 
Date:   2016-02-03T18:35:29Z

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison






[GitHub] kafka pull request: KAFKA-3198: Ticket Renewal Thread exits premat...

2016-02-03 Thread kunickiaj
Github user kunickiaj closed the pull request at:

https://github.com/apache/kafka/pull/858




[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131813#comment-15131813
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

Github user kunickiaj closed the pull request at:

https://github.com/apache/kafka/pull/858







[GitHub] kafka pull request: MINOR: Pin to system tests to ducktape 0.3.10

2016-02-03 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/863

MINOR: Pin to system tests to ducktape 0.3.10



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka increment-ducktape-0.9.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/863.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #863


commit 053548a81024aaf6fffc90499a24b32785e390cd
Author: Geoff Anderson 
Date:   2016-02-03T23:39:12Z

Pin to system tests to ducktape 0.3.10






[jira] [Commented] (KAFKA-3203) Add UnknownCodecException and UnknownMagicByteException to error mapping

2016-02-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131805#comment-15131805
 ] 

Grant Henke commented on KAFKA-3203:


[~becket_qin] I couldn't find anywhere that UnknownMagicByteException was used 
or thrown. Perhaps it should just be removed as part of this patch. 

I want to be careful not to add too many error codes to the map. Would it make 
sense to consider an UnknownCodecException as a corrupt message, mapping to 
error code 2?

> Add UnknownCodecException and UnknownMagicByteException to error mapping
> 
>
> Key: KAFKA-3203
> URL: https://issues.apache.org/jira/browse/KAFKA-3203
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Grant Henke
> Fix For: 0.10.0.0, 0.9.1.0
>
>
> Currently most of the exceptions returned to the user have an error code. While 
> UnknownCodecException and UnknownMagicByteException can also be thrown to the 
> client, the broker does not have an error mapping for them, so clients will only 
> receive UnknownServerException, which is vague.
> We should create those two exceptions in client package and add them to error 
> mapping.
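The error-mapping idea discussed above can be sketched as follows. This is illustrative only: the exception class and map here are stand-ins for Kafka's real org.apache.kafka.common.protocol.Errors enum, though the codes 2 (CORRUPT_MESSAGE) and -1 (UNKNOWN_SERVER_ERROR, surfaced to clients as UnknownServerException) match the wire protocol:

```java
import java.util.HashMap;
import java.util.Map;

public class ErrorMapping {
    // Hypothetical stand-in for the codec exception discussed in the issue.
    static class UnknownCodecException extends RuntimeException {}

    // Sketch of a broker-side exception -> error code map.
    private static final Map<Class<?>, Short> CODES = new HashMap<>();
    static {
        // The suggestion above: treat an unknown codec as a corrupt message
        // (error code 2) rather than minting a brand-new error code.
        CODES.put(UnknownCodecException.class, (short) 2);
    }

    static short codeFor(Throwable t) {
        // Unmapped exceptions fall back to -1, which clients currently
        // see as the vague UnknownServerException.
        return CODES.getOrDefault(t.getClass(), (short) -1);
    }

    public static void main(String[] args) {
        System.out.println(codeFor(new UnknownCodecException())); // 2
        System.out.println(codeFor(new IllegalStateException())); // -1
    }
}
```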





[GitHub] kafka pull request: MINOR: Fix restoring for source KTable

2016-02-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/860




[jira] [Assigned] (KAFKA-3203) Add UnknownCodecException and UnknownMagicByteException to error mapping

2016-02-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3203:
--

Assignee: Grant Henke






[jira] [Updated] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread Adam Kunicki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Kunicki updated KAFKA-3199:

Status: Patch Available  (was: In Progress)

https://github.com/apache/kafka/pull/862

> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor which then 
> performs a login and starts a ticket renewal thread. The problem here is that 
> because Kafka performs its own login, it doesn't offer the ability to re-use 
> an existing subject that's already managed by the client application.
> The goal of LoginManager appears to be to return a valid Subject. 
> It would be a simple fix to have LoginManager.acquireLoginManager() check for 
> a new config, e.g. kerberos.use.existing.subject. 
> Instead of creating a new Login in the constructor, this would simply call 
> Subject.getSubject(AccessController.getContext()); to use the already 
> logged-in Subject.
> This is also doable without introducing a new configuration, by simply 
> checking whether a valid Subject is already available, but it may be 
> preferable to require that users explicitly request this behavior.
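The proposed lookup can be sketched as follows. Method and parameter names are hypothetical; the real lookup in Login.java would be Subject.getSubject(AccessController.getContext()), injected here as a Supplier so the fallback logic is runnable without a JAAS context:

```java
import java.util.function.Supplier;
import javax.security.auth.Subject;

public class SubjectReuse {
    /**
     * Sketch: prefer a Subject already established by the client application
     * over performing a fresh JAAS login from kafka_client_jaas.conf. The
     * useExistingSubject flag stands in for the proposed (hypothetical)
     * kerberos.use.existing.subject config.
     */
    static Subject resolveSubject(boolean useExistingSubject,
                                  Supplier<Subject> contextLookup,
                                  Supplier<Subject> jaasLogin) {
        Subject existing = contextLookup.get();
        if (useExistingSubject && existing != null) {
            return existing; // reuse credentials managed by the application
        }
        return jaasLogin.get(); // fall back to Kafka performing its own login
    }

    public static void main(String[] args) {
        Subject appManaged = new Subject();
        Subject fresh = new Subject();
        // An existing Subject wins when the flag is set...
        System.out.println(resolveSubject(true, () -> appManaged, () -> fresh) == appManaged); // true
        // ...otherwise Kafka performs its own login, as it does today.
        System.out.println(resolveSubject(false, () -> appManaged, () -> fresh) == fresh);     // true
    }
}
```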





[jira] [Issue Comment Deleted] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread Adam Kunicki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Kunicki updated KAFKA-3199:

Comment: was deleted

(was: https://github.com/apache/kafka/pull/862)






[GitHub] kafka pull request: KAFKA-3199: LoginManager should allow using an...

2016-02-03 Thread kunickiaj
GitHub user kunickiaj opened a pull request:

https://github.com/apache/kafka/pull/862

KAFKA-3199: LoginManager should allow using an existing Subject

One possible solution that doesn't require a new configuration parameter. It 
assumes, however, that if there is already a Subject you want to use its 
existing credentials rather than logging in from another keytab specified by 
kafka_client_jaas.conf.

Because this makes the jaas.conf no longer required, a missing KafkaClient 
context is no longer an error, but merely a warning.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3199

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #862


commit 83fcc6c6150e9b22d82573b9935ee43a0692ffa4
Author: Adam Kunicki 
Date:   2016-02-04T02:35:11Z

KAFKA-3199: LoginManager should allow using an existing Subject






[jira] [Commented] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131609#comment-15131609
 ] 

ASF GitHub Bot commented on KAFKA-3199:
---

GitHub user kunickiaj opened a pull request:

https://github.com/apache/kafka/pull/862

KAFKA-3199: LoginManager should allow using an existing Subject

One possible solution that doesn't require a new configuration parameter. It 
assumes, however, that if there is already a Subject you want to use its 
existing credentials rather than logging in from another keytab specified by 
kafka_client_jaas.conf.

Because this makes the jaas.conf no longer required, a missing KafkaClient 
context is no longer an error, but merely a warning.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3199

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/862.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #862


commit 83fcc6c6150e9b22d82573b9935ee43a0692ffa4
Author: Adam Kunicki 
Date:   2016-02-04T02:35:11Z

KAFKA-3199: LoginManager should allow using an existing Subject









[jira] [Work started] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread Adam Kunicki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3199 started by Adam Kunicki.
---





[jira] [Commented] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread Adam Kunicki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131607#comment-15131607
 ] 

Adam Kunicki commented on KAFKA-3199:
-

The contract would then become: if a Subject already exists, Kafka has to 
assume any required credentials are already present and would potentially 
ignore what was specified in kafka_client_jaas.conf.

If you're OK with this, then I believe it's safe.

But with an additional configuration present, we no longer have to make 
assumptions about the user's intent: either use the existing credentials, or 
use the settings found in kafka_client_jaas.conf (ignoring the existing 
credentials).

I'll open a pull request with a working example (without the additional config 
param).

> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor, which then 
> performs a login and starts a ticket renewal thread. The problem here is 
> that because Kafka performs its own login, it doesn't offer the ability to 
> re-use an existing Subject that's already managed by the client application.
> The goal of LoginManager appears to be to return a valid Subject. 
> It would be a simple fix to have LoginManager.acquireLoginManager() check 
> for a new config, e.g. kerberos.use.existing.subject. 
> Instead of creating a new Login in the constructor, this would simply call 
> Subject.getSubject(AccessController.getContext()); to use the already 
> logged-in Subject.
> This is also doable without introducing a new configuration, by simply 
> checking whether a valid Subject is already available, but I think it may be 
> preferable to require that users explicitly request this behavior.





[jira] [Created] (KAFKA-3203) Add UnknownCodecException and UnknownMagicByteException to error mapping

2016-02-03 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-3203:
---

 Summary: Add UnknownCodecException and UnknownMagicByteException 
to error mapping
 Key: KAFKA-3203
 URL: https://issues.apache.org/jira/browse/KAFKA-3203
 Project: Kafka
  Issue Type: Bug
  Components: clients, core
Affects Versions: 0.9.0.0
Reporter: Jiangjie Qin
 Fix For: 0.10.0.0, 0.9.1.0


Currently most of the exceptions returned to users have an error code. While 
UnknownCodecException and UnknownMagicByteException can also be thrown to the 
client, the broker does not have an error mapping for them, so clients will 
only receive UnknownServerException, which is vague.

We should create those two exceptions in the client package and add them to 
the error mapping.
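As a rough sketch of what an error mapping buys, where the exception classes and numeric codes below are made-up stand-ins rather than the actual Kafka protocol assignments:

```java
import java.util.HashMap;
import java.util.Map;

public class ErrorMappingSketch {
    // Hypothetical stand-ins for the two exceptions; the codes are invented.
    static class UnknownCodecException extends RuntimeException {}
    static class UnknownMagicByteException extends RuntimeException {}

    private static final short UNKNOWN_SERVER_ERROR = -1;
    private static final Map<Class<?>, Short> CODES = new HashMap<>();

    static {
        // Without these entries, both exceptions would fall through to the
        // generic UNKNOWN_SERVER_ERROR code on the wire, which is exactly
        // the vagueness the ticket describes.
        CODES.put(UnknownCodecException.class, (short) 100);
        CODES.put(UnknownMagicByteException.class, (short) 101);
    }

    static short codeFor(Throwable t) {
        return CODES.getOrDefault(t.getClass(), UNKNOWN_SERVER_ERROR);
    }

    public static void main(String[] args) {
        System.out.println(codeFor(new UnknownCodecException()));      // 100
        System.out.println(codeFor(new IllegalStateException("???"))); // -1
    }
}
```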





[jira] [Commented] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2016-02-03 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131536#comment-15131536
 ] 

Jiangjie Qin commented on KAFKA-3177:
-

[~hachikuji] The two categories make sense. As you said, it is possible that 
the same exception can sometimes be ephemeral and sometimes be permanent, 
depending on how "ephemeral" they are. It seems related to KAFKA-2391. Do you 
think it would be useful to add some timeout to the blocking calls?

BTW, in this ticket I am currently planning to only fix the position() call 
issue. Please let me know if you prefer a bigger operation on the exceptions.
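The timeout idea above can be sketched in isolation; the interface and helper below are hypothetical stand-ins for the consumer's internal metadata refresh loop, not actual consumer APIs:

```java
public class BoundedRetrySketch {
    // Stand-in for the consumer's view of cluster metadata.
    interface MetadataView {
        boolean partitionKnown(String topicPartition);
    }

    // Instead of refreshing metadata forever, bound the wait and surface a
    // clear error once the deadline passes.
    static void awaitPartition(MetadataView metadata, String tp, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!metadata.partitionKnown(tp)) {
            if (System.currentTimeMillis() >= deadline) {
                throw new IllegalStateException("Partition does not exist: " + tp);
            }
            Thread.sleep(5); // stand-in for one metadata refresh round-trip
        }
    }

    public static void main(String[] args) throws InterruptedException {
        try {
            // A partition that never appears now fails fast instead of
            // looping forever.
            awaitPartition(tp -> false, "no-such-topic-0", 50);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```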

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> This can be easily reproduced as follows:
> {code}
> {
> ...
> consumer.assign(someNonExistingTopicPartition);
> consumer.position(someNonExistingTopicPartition);
> ...
> }
> {code}
> It seems that when position() is called we try to do the following:
> 1. Fetch committed offsets.
> 2. If there are no committed offsets, try to reset the offset using the 
> reset strategy. In sendListOffsetRequest(), if the consumer does not know 
> the TopicPartition, it will refresh its metadata and retry. In this case, 
> because the partition does not exist, we fall into an infinite loop of 
> refreshing topic metadata.
> Another orthogonal issue is that if the topic in the above code piece does 
> not exist, the position() call will actually create the topic, because topic 
> metadata requests can currently auto-create topics. This is a known separate 
> issue.





Build failed in Jenkins: kafka-trunk-jdk7 #1014

2016-02-03 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3194: Validate security.inter.broker.protocol against the adver…

--
[...truncated 4723 lines...]

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJava
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSourceConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSinkConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateAndStop PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.WorkerSource

[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131507#comment-15131507
 ] 

Jiangjie Qin commented on KAFKA-3197:
-

Hey Jay,

Yes, an idempotent producer would solve the problem. And I completely agree 
that when people set in-flight requests to one they are expecting no 
re-ordering. 

I was initially thinking of implicitly treating 
in.flight.request.per.connection=1 as in.flight.batch.per.partition=1. This 
does not need an additional configuration, but there is a subtle difference 
in terms of performance. If a producer has a lot of partitions to send to the 
same broker, theoretically we can allow more than one in-flight request as 
long as each request addresses distinct partitions. If we enforce 
in.flight.request=1, we lose this parallelism. But given this is what is 
already there, it is probably fine to leave it as is.

I'll update the patch to remove the newly added configuration and simply reuse 
in.flight.request.per.connection. Otherwise, please let me know if you think 
the subtle optimization is worth the configuration change.
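For readers following along, the setting being discussed is configured like this (the property keys are the real producer config names; the broker address is a placeholder):

```java
import java.util.Properties;

public class OrderingConfigSketch {
    // Builds producer properties that limit pipelining to one request per
    // connection. As this thread notes, on its own this still does not
    // guarantee ordering across a leader migration when retries are enabled.
    static Properties orderedProducerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("max.in.flight.requests.per.connection", "1");
        props.put("retries", "3");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(orderedProducerProps()
                .getProperty("max.in.flight.requests.per.connection")); // 1
    }
}
```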

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request count to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried 
> and sent to broker B. At this point, all the later messages have already 
> been sent, so we have re-ordering.





[jira] [Updated] (KAFKA-3186) Kafka authorizer should be aware of principal types it supports.

2016-02-03 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3186:
--
Status: Patch Available  (was: Open)

> Kafka authorizer should be aware of principal types it supports.
> 
>
> Key: KAFKA-3186
> URL: https://issues.apache.org/jira/browse/KAFKA-3186
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently, the Kafka authorizer is agnostic of the principal types it 
> supports, and so are the acls CRUD methods in 
> {{kafka.security.auth.Authorizer}}. The intent behind this is to keep Kafka 
> authorization pluggable, which is really great. However, it leads to the 
> following issues.
> 1. {{kafka-acls.sh}} supports a pluggable authorizer and custom principals, 
> but is somewhat tied to {{SimpleAclsAuthorizer}}. Its help messages have 
> details that might not be true for a custom authorizer, for instance 
> assuming User is a supported PrincipalType.
> 2. The acls CRUD methods perform no check on the validity of acls, as they 
> are not aware of what principal types they support. This opens up space for 
> lots of user errors; KAFKA-3097 is an instance.
> I suggest we add a {{getSupportedPrincipalTypes}} method to the authorizer, 
> use it for acl verification during acls CRUD, and make the {{kafka-acls.sh}} 
> help messages more generic.
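The proposal could take roughly this shape; the interface below is a hypothetical slice of the authorizer API for illustration, not the actual {{kafka.security.auth.Authorizer}} trait:

```java
import java.util.Set;

public class PrincipalTypeCheckSketch {
    // Hypothetical slice of the proposed authorizer API.
    interface Authorizer {
        Set<String> getSupportedPrincipalTypes();
    }

    // Acl CRUD would validate the principal type up front instead of
    // silently accepting acls the authorizer can never match (the kind of
    // user error seen in KAFKA-3097).
    static void validatePrincipalType(Authorizer authorizer, String type) {
        if (!authorizer.getSupportedPrincipalTypes().contains(type)) {
            throw new IllegalArgumentException(
                    "Unsupported principal type: " + type);
        }
    }

    public static void main(String[] args) {
        Authorizer simple = () -> Set.of("User"); // e.g. only User principals
        validatePrincipalType(simple, "User"); // passes
        try {
            validatePrincipalType(simple, "Group");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // rejected before any CRUD
        }
    }
}
```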





[jira] [Commented] (KAFKA-3186) Kafka authorizer should be aware of principal types it supports.

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131490#comment-15131490
 ] 

ASF GitHub Bot commented on KAFKA-3186:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/861

KAFKA-3186: Make Kafka authorizer aware of principal types it supports.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3186

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #861


commit 0617c75ee808d58009afdaf976fc1ae455cfb2e8
Author: Ashish Singh 
Date:   2016-02-04T01:06:29Z

KAFKA-3186: Make Kafka authorizer aware of principal types it supports.




> Kafka authorizer should be aware of principal types it supports.
> 
>
> Key: KAFKA-3186
> URL: https://issues.apache.org/jira/browse/KAFKA-3186
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently, the Kafka authorizer is agnostic of the principal types it 
> supports, and so are the acls CRUD methods in 
> {{kafka.security.auth.Authorizer}}. The intent behind this is to keep Kafka 
> authorization pluggable, which is really great. However, it leads to the 
> following issues.
> 1. {{kafka-acls.sh}} supports a pluggable authorizer and custom principals, 
> but is somewhat tied to {{SimpleAclsAuthorizer}}. Its help messages have 
> details that might not be true for a custom authorizer, for instance 
> assuming User is a supported PrincipalType.
> 2. The acls CRUD methods perform no check on the validity of acls, as they 
> are not aware of what principal types they support. This opens up space for 
> lots of user errors; KAFKA-3097 is an instance.
> I suggest we add a {{getSupportedPrincipalTypes}} method to the authorizer, 
> use it for acl verification during acls CRUD, and make the {{kafka-acls.sh}} 
> help messages more generic.





[GitHub] kafka pull request: KAFKA-3186: Make Kafka authorizer aware of pri...

2016-02-03 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/861

KAFKA-3186: Make Kafka authorizer aware of principal types it supports.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3186

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #861


commit 0617c75ee808d58009afdaf976fc1ae455cfb2e8
Author: Ashish Singh 
Date:   2016-02-04T01:06:29Z

KAFKA-3186: Make Kafka authorizer aware of principal types it supports.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Fix restoring for source KTable

2016-02-03 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/860

MINOR: Fix restoring for source KTable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KRestoreChangelog

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #860


commit 8c6c6bf6069de99a77f604361515222e43d03f81
Author: Guozhang Wang 
Date:   2016-02-04T00:40:47Z

fix restoring for KTable






[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131451#comment-15131451
 ] 

Jay Kreps commented on KAFKA-3197:
--

Would it be better to treat this more as a bug than a configurable thing in the 
in-flight=1 case? I.e., when would I have in-flight=1 and not want the 
reordering protection? I agree people are depending on that now.

Slightly longer term, I think we are actively picking up that 
idempotence/txn/semantics line of work, and I think it is possible that 
whatever is done for idempotence might be the more principled solution, as it 
could solve this problem even in the presence of pipelining. The idea here is 
that there is a sequence number per partition which the server uses to dedupe, 
and this ensures that if one request fails, all other pipelined requests on 
that partition also fail (and then retry idempotently), so that you don't 
reorder irrespective of the depth of the pipelining.
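The per-partition sequence idea can be sketched in isolation; everything below is illustrative and not the eventual Kafka implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class SequenceDedupSketch {
    // The broker tracks the last sequence number accepted per partition.
    // A batch whose sequence is not exactly lastSeq + 1 is rejected, so a
    // retried duplicate (or a batch that arrives ahead of a lost
    // predecessor) cannot be appended out of order.
    private final Map<String, Long> lastSeq = new HashMap<>();

    boolean tryAppend(String partition, long seq) {
        long expected = lastSeq.getOrDefault(partition, -1L) + 1;
        if (seq != expected) {
            return false; // duplicate or gap: reject so the producer retries in order
        }
        lastSeq.put(partition, seq);
        return true;
    }

    public static void main(String[] args) {
        SequenceDedupSketch broker = new SequenceDedupSketch();
        System.out.println(broker.tryAppend("tp-0", 0)); // true
        System.out.println(broker.tryAppend("tp-0", 2)); // false: seq 1 is missing
        System.out.println(broker.tryAppend("tp-0", 1)); // true: retry fills the gap
        System.out.println(broker.tryAppend("tp-0", 1)); // false: duplicate rejected
    }
}
```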

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request count to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried 
> and sent to broker B. At this point, all the later messages have already 
> been sent, so we have re-ordering.





[jira] [Commented] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-02-03 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131425#comment-15131425
 ] 

Jiangjie Qin commented on KAFKA-3188:
-

[~apovzner] Thanks for the suggestion. I just split the system test into three 
tickets.

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between a 0.10.0 broker 
> and clients on older versions. The client versions should include 0.9.0 and 
> 0.8.x.





[jira] [Created] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-02-03 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-3202:
---

 Summary: Add system test for KIP-31 and KIP-32 - Change message 
format version on the fly
 Key: KAFKA-3202
 URL: https://issues.apache.org/jira/browse/KAFKA-3202
 Project: Kafka
  Issue Type: Sub-task
  Components: system tests
Reporter: Jiangjie Qin
 Fix For: 0.10.0.0


The system test should cover the case where message format changes are made 
while clients are producing/consuming. The message format change should not 
cause client-side issues.





[jira] [Updated] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-02-03 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3188:

Summary: Add system test for KIP-31 and KIP-32 - Compatibility Test  (was: 
Add integration test for KIP-31 and KIP-32 - Compatibility Test)

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between a 0.10.0 broker 
> and clients on older versions. The client versions should include 0.9.0 and 
> 0.8.x.





[jira] [Updated] (KAFKA-3188) Add integration test for KIP-31 and KIP-32 - Compatibility Test

2016-02-03 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3188:

Description: The integration test should test the compatibility between 
0.10.0 broker with clients on older versions. The clients version should 
include 0.9.0 and 0.8.x.  (was: The integration test should cover the 
followings:
1. Compatibility test.
2. Upgrade test
3. Changing message format type on the fly.)

> Add integration test for KIP-31 and KIP-32 - Compatibility Test
> ---
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> The integration test should test the compatibility between a 0.10.0 broker 
> and clients on older versions. The client versions should include 0.9.0 and 
> 0.8.x.





[jira] [Updated] (KAFKA-3188) Add integration test for KIP-31 and KIP-32 - Compatibility Test

2016-02-03 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3188:

Summary: Add integration test for KIP-31 and KIP-32 - Compatibility Test  
(was: Add integration test for KIP-31 and KIP-32)

> Add integration test for KIP-31 and KIP-32 - Compatibility Test
> ---
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> The integration test should cover the followings:
> 1. Compatibility test.
> 2. Upgrade test
> 3. Changing message format type on the fly.





[jira] [Created] (KAFKA-3201) Add system test for KIP-31 and KIP-32 - Upgrade Test

2016-02-03 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-3201:
---

 Summary: Add system test for KIP-31 and KIP-32 - Upgrade Test
 Key: KAFKA-3201
 URL: https://issues.apache.org/jira/browse/KAFKA-3201
 Project: Kafka
  Issue Type: Sub-task
  Components: system tests
Reporter: Jiangjie Qin
 Fix For: 0.10.0.0


This system test should test the procedure to upgrade a Kafka broker from 
0.8.x or 0.9.0 to 0.10.0.

The procedure is documented in KIP-32:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message





[jira] [Updated] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3198:
---
Fix Version/s: 0.9.0.1

> Ticket Renewal Thread exits prematurely due to inverted comparison
> --
>
> Key: KAFKA-3198
> URL: https://issues.apache.org/jira/browse/KAFKA-3198
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
> Fix For: 0.9.0.1
>
>
> Line 152 of Login.java:
> {code}
> if (isUsingTicketCache && tgt.getRenewTill() != null && 
> tgt.getRenewTill().getTime() >= expiry) {
> {code}
> This line is used to determine whether to exit the thread and issue an error 
> to the user.
> The >= should be < since we are actually able to renew if the renewTill time 
> is later than the current ticket expiration.





[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131409#comment-15131409
 ] 

Jiangjie Qin commented on KAFKA-3197:
-

[~enothereska] We have already defined sync and async in the producer at the 
per-message level when users call send(). Having another configuration is a 
little confusing. If the only purpose of a "sync" send is to send messages in 
order, making that clear in the configuration seems reasonable.

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request count to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 is retried 
> and sent to broker B. At this point, all the later messages have already 
> been sent, so we have re-ordering.





[jira] [Updated] (KAFKA-3194) Validate security.inter.broker.protocol against the advertised.listeners protocols

2016-02-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3194:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Validate security.inter.broker.protocol against the advertised.listeners 
> protocols
> --
>
> Key: KAFKA-3194
> URL: https://issues.apache.org/jira/browse/KAFKA-3194
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> When testing Kafka I found that Kafka can run in a very unhealthy state due 
> to a misconfigured security.inter.broker.protocol. There are errors in the 
> log (such as the one shown below), but it would be better to prevent startup 
> with a clear error message in this scenario.
> Sample error in the server logs:
> {code}
> ERROR kafka.controller.ReplicaStateMachine$BrokerChangeListener: 
> [BrokerChangeListener on Controller 71]: Error while handling broker changes
> kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not 
> found for broker 69
>   at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:141)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:88)
>   at 
> kafka.controller.ControllerChannelManager.addBroker(ControllerChannelManager.scala:73)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}





[GitHub] kafka pull request: HOTFIX: fix broken WorkerSourceTask test

2016-02-03 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/859

HOTFIX: fix broken WorkerSourceTask test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka hotfix-worker-source-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #859


commit 02c79c1fe7c5ec6bb5e776c482b23c81e42b5177
Author: Jason Gustafson 
Date:   2016-02-03T23:47:58Z

HOTFIX: fix broken WorkerSourceTask test




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3194) Validate security.inter.broker.protocol against the advertised.listeners protocols

2016-02-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3194:

Fix Version/s: 0.9.1.0

> Validate security.inter.broker.protocol against the advertised.listeners 
> protocols
> --
>
> Key: KAFKA-3194
> URL: https://issues.apache.org/jira/browse/KAFKA-3194
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> When testing Kafka I found that Kafka can run in a very unhealthy state due 
> to a misconfigured security.inter.broker.protocol. There are errors in the 
> log (such as the one shown below), but it would be better to prevent startup 
> with a clear error message in this scenario.
> Sample error in the server logs:
> {code}
> ERROR kafka.controller.ReplicaStateMachine$BrokerChangeListener: 
> [BrokerChangeListener on Controller 71]: Error while handling broker changes
> kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not 
> found for broker 69
>   at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:141)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:88)
>   at 
> kafka.controller.ControllerChannelManager.addBroker(ControllerChannelManager.scala:73)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}
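The validation proposed in this ticket can be illustrated with a small startup-time check. This is a hypothetical sketch (class and method names are invented, not Kafka's actual config code): fail fast when the inter-broker protocol has no matching advertised listener, instead of limping along until the controller hits the exception above.

```java
import java.util.Set;

public class InterBrokerProtocolCheck {
    // Hypothetical sketch: reject startup when security.inter.broker.protocol
    // names a protocol that no advertised listener exposes, rather than
    // failing later with BrokerEndPointNotAvailableException.
    static void validate(String interBrokerProtocol, Set<String> advertisedProtocols) {
        if (!advertisedProtocols.contains(interBrokerProtocol)) {
            throw new IllegalArgumentException(
                "security.inter.broker.protocol=" + interBrokerProtocol
                + " has no matching listener in " + advertisedProtocols);
        }
    }

    public static void main(String[] args) {
        validate("SSL", Set.of("SSL", "PLAINTEXT"));   // healthy config: passes
        try {
            validate("PLAINTEXT", Set.of("SSL"));      // misconfigured: rejected at startup
        } catch (IllegalArgumentException e) {
            System.out.println("startup rejected: " + e.getMessage());
        }
    }
}
```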





[jira] [Commented] (KAFKA-3194) Validate security.inter.broker.protocol against the advertised.listeners protocols

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131393#comment-15131393
 ] 

ASF GitHub Bot commented on KAFKA-3194:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/851


> Validate security.inter.broker.protocol against the advertised.listeners 
> protocols
> --
>
> Key: KAFKA-3194
> URL: https://issues.apache.org/jira/browse/KAFKA-3194
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> When testing Kafka I found that Kafka can run in a very unhealthy state due 
> to a misconfigured security.inter.broker.protocol. There are errors in the 
> log (such as the one shown below), but it would be better to prevent startup 
> with a clear error message in this scenario.
> Sample error in the server logs:
> {code}
> ERROR kafka.controller.ReplicaStateMachine$BrokerChangeListener: 
> [BrokerChangeListener on Controller 71]: Error while handling broker changes
> kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not 
> found for broker 69
>   at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:141)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:88)
>   at 
> kafka.controller.ControllerChannelManager.addBroker(ControllerChannelManager.scala:73)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}





[GitHub] kafka pull request: KAFKA-3194: Validate security.inter.broker.pro...

2016-02-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/851




[jira] [Commented] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131378#comment-15131378
 ] 

ASF GitHub Bot commented on KAFKA-3198:
---

GitHub user kunickiaj opened a pull request:

https://github.com/apache/kafka/pull/858

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted c…

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison

The >= should be < since we are actually able to renew if the renewTill 
time is later than the current ticket expiration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #858


commit 65b10cd2e0bae97833be5459c29953695d8d396a
Author: Adam Kunicki 
Date:   2016-02-03T18:35:29Z

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison




> Ticket Renewal Thread exits prematurely due to inverted comparison
> --
>
> Key: KAFKA-3198
> URL: https://issues.apache.org/jira/browse/KAFKA-3198
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> Line 152 of Login.java:
> {code}
> if (isUsingTicketCache && tgt.getRenewTill() != null && 
> tgt.getRenewTill().getTime() >= expiry) {
> {code}
> This line is used to determine whether to exit the thread and issue an error 
> to the user.
> The >= should be < since we are actually able to renew if the renewTill time 
> is later than the current ticket expiration.
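The inverted comparison can be illustrated with a small, self-contained sketch (the helper name and standalone form are invented for illustration; the real check lives in the renewal thread in Login.java):

```java
import java.util.Date;

public class RenewCheckDemo {
    // Sketch of the corrected check: the renewal thread should only give up
    // when the ticket's renewable lifetime (renewTill) ends BEFORE the
    // current expiry, i.e. when renewing cannot extend the ticket. The bug
    // used `>=`, which made the thread exit in exactly the healthy case.
    static boolean cannotRenew(boolean isUsingTicketCache, Date renewTill, long expiry) {
        return isUsingTicketCache && renewTill != null && renewTill.getTime() < expiry;
    }

    public static void main(String[] args) {
        long expiry = 1_000_000L;
        // renewTill after expiry: renewal is possible, thread keeps running
        System.out.println(cannotRenew(true, new Date(2_000_000L), expiry)); // prints false
        // renewTill before expiry: renewal cannot help, exit with an error
        System.out.println(cannotRenew(true, new Date(500_000L), expiry));   // prints true
    }
}
```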





[GitHub] kafka pull request: KAFKA-3198: Ticket Renewal Thread exits premat...

2016-02-03 Thread kunickiaj
GitHub user kunickiaj opened a pull request:

https://github.com/apache/kafka/pull/858

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted c…

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison

The >= should be < since we are actually able to renew if the renewTill 
time is later than the current ticket expiration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kunickiaj/kafka KAFKA-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #858


commit 65b10cd2e0bae97833be5459c29953695d8d396a
Author: Adam Kunicki 
Date:   2016-02-03T18:35:29Z

KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted 
comparison






[jira] [Commented] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131356#comment-15131356
 ] 

Gwen Shapira commented on KAFKA-3199:
-

Are there any drawbacks for simply checking if a valid subject exists? For 
example, could it cause any other system to fail for some reason?

If subject reuse is completely safe, I prefer not to add a configuration and 
give users yet-another-security-option to get confused about.

> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor which then 
> performs a login and starts a ticket renewal thread. The problem here is that 
> because Kafka performs its own login, it doesn't offer the ability to re-use 
> an existing subject that's already managed by the client application.
> The goal of LoginManager appears to be to be able to return a valid Subject. 
> It would be a simple fix to have LoginManager.acquireLoginManager() check for 
> a new config e.g. kerberos.use.existing.subject. 
> Instead of creating a new Login in the constructor, this would simply call 
> Subject.getSubject(AccessController.getContext()) to use the already 
> logged-in Subject.
> This is also doable without introducing a new configuration and simply 
> checking if there is already a valid Subject available, but I think it may be 
> preferable to require that users explicitly request this behavior.





Re: [DISCUSS] KIP-46: Self Healing

2016-02-03 Thread Neha Narkhede
Adi,

Thanks for the write-up. Here are my thoughts:

I think you are suggesting a way of automating resurrecting a topic’s
replication factor in the presence of a specific scenario: in the event of
permanent broker failures. I agree that the partition reassignment
mechanism should be used to add replicas when they are lost to permanent
broker failures. But I think the KIP probably bites off more than we can
chew.

Before we automate detection of permanent broker failures and have the
controller mitigate through automatic data balancing, I’d like to point out
that our current difficulty is not that but the ability to generate a
workable partition assignment for rebalancing data in a cluster.

There are 2 problems with partition rebalancing today:

   1. Lack of replica throttling for balancing data: In the absence of
   replica throttling, even if you come up with an assignment that might be
   workable, it isn’t practical to kick it off without worrying about bringing
   the entire cluster down. I don’t think the hack of moving partitions in
   batches is effective, as it is at best a rough guess.
   2. Lack of support for policies in the rebalance tool that automatically
   generate a workable partition assignment: There is no easy way to generate
   a partition reassignment JSON file. An example of a policy is “end up with
   an equal number of partitions on every broker while minimizing data
   movement”. There might be other policies that might make sense, we’d have
   to experiment.

Broadly speaking, the data balancing problem comprises 3 parts:

   1. Trigger: An event that triggers data balancing to take place. KIP-46
   suggests a specific trigger and that is permanent broker failure. But there
   might be several other events that might make sense — Cluster expansion,
   decommissioning brokers, data imbalance
   2. Policy: Given a set of constraints, generate a target partition
   assignment that can be executed when triggered.
   3. Mechanism: Given a partition assignment, make the state changes and
   actually move the data until the target assignment is achieved.

Currently, the trigger is manual through the rebalance tool, there is no
support for any viable policy today and we have a built-in mechanism that,
given a policy and upon a trigger, moves data in a cluster but does not
support throttling.

Given that both the policy and the throttling improvement to the mechanism
are hard problems and given our past experience of operationalizing
partition reassignment (required months of testing before we got it right),
I strongly recommend attacking this problem in stages. I think a more
practical approach would be to add the concept of pluggable policies in the
rebalance tool, implement a practical policy that generates a workable
partition assignment upon triggering the tool and improve the mechanism to
support throttling so that a given policy can succeed without manual
intervention. If we solved these problems first, the rebalance tool would
be much more accessible to Kafka users and operators.

Assuming that we do this, the problem that KIP-46 aims to solve becomes
much easier. You can separate the detection of permanent broker failures
(trigger) from the mitigation (above-mentioned improvements to data
balancing). The latter will be a native capability in Kafka. Detecting
permanent hardware failures is more easily done via an external script that
uses a simple health check. (Part 1 of KIP-46).

I agree that it will be great to *eventually* be able to fully automate
both the trigger as well as the policies while also improving the
mechanism. But I’m highly skeptical of big-bang approaches that go from a
completely manual and cumbersome process to a fully automated one,
especially when that involves large-scale data movement in a running
cluster. Once we stabilize these changes and feel confident that they work,
we can push the policy into the controller and have it automatically be
triggered based on different events.

Thanks,
Neha

On Tue, Feb 2, 2016 at 6:13 PM, Aditya Auradkar <
aaurad...@linkedin.com.invalid> wrote:

> Hey everyone,
>
> I just created a kip to discuss automated replica reassignment when we lose
> a broker in the cluster.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-46%3A+Self+Healing+Kafka
>
> Any feedback is welcome.
>
> Thanks,
> Aditya
>



-- 
Thanks,
Neha


Jenkins build is back to normal : kafka-trunk-jdk8 #341

2016-02-03 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3194) Validate security.inter.broker.protocol against the advertised.listeners protocols

2016-02-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131181#comment-15131181
 ] 

Grant Henke commented on KAFKA-3194:


[~junrao] [~ewencp] [~gwenshap] [~guozhang]
Would any of you have a chance to review this today?

> Validate security.inter.broker.protocol against the advertised.listeners 
> protocols
> --
>
> Key: KAFKA-3194
> URL: https://issues.apache.org/jira/browse/KAFKA-3194
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> When testing Kafka I found that Kafka can run in a very unhealthy state due 
> to a misconfigured security.inter.broker.protocol. There are errors in the 
> log (such as the one shown below), but it would be better to prevent startup 
> with a clear error message in this scenario.
> Sample error in the server logs:
> {code}
> ERROR kafka.controller.ReplicaStateMachine$BrokerChangeListener: 
> [BrokerChangeListener on Controller 71]: Error while handling broker changes
> kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not 
> found for broker 69
>   at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:141)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:88)
>   at 
> kafka.controller.ControllerChannelManager.addBroker(ControllerChannelManager.scala:73)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$4.apply(ReplicaStateMachine.scala:372)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:372)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}





[jira] [Commented] (KAFKA-3200) MessageSet from broker seems invalid

2016-02-03 Thread Rajiv Kurian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131155#comment-15131155
 ] 

Rajiv Kurian commented on KAFKA-3200:
-

Got it.  Yeah it would be ideal to not have this problem. Also would be good to 
mention this in the guide, for people writing new clients.

> MessageSet from broker seems invalid
> 
>
> Key: KAFKA-3200
> URL: https://issues.apache.org/jira/browse/KAFKA-3200
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
> Environment: Linux,  running Oracle JVM 1.8
>Reporter: Rajiv Kurian
>
> I am writing a java consumer client for Kafka and using the protocol guide at 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  to parse buffers. I am currently running into a problem parsing certain 
> fetch responses. Many times it works fine but some other times it does not. 
> It might just be a bug with my implementation in which case I apologize.
> My messages are uncompressed and exactly 23 bytes in length and have null 
> keys. So each Message in my MessageSet is exactly size 4 (crc) + 
> 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) + 0 (key is null) + 
> 4(num_value_bytes) + 23(value_bytes) = 37 bytes.
> So each element of the MessageSet itself is exactly 37 (size of message) + 8 
> (offset) + 4 (message_size) = 49 bytes.
> In comparison an empty message set element should be of size 8 (offset) + 4 
> (message_size) + 4 (crc) + 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) 
> + 0 (key is null) + 4(num_value_bytes) + 0(value is null)  = 26 bytes
> I occasionally receive a MessageSet which says size is 1000. A size of 1000 
> is not divisible by my MessageSet element size which is 49 bytes. When I 
> parse such a message set I can actually read 20 message set elements (49 
> bytes each), which is 980 bytes. I now have 20 extra bytes to parse, which is 
> actually less than even an empty message (26 bytes). At this moment I don't 
> know how to parse the messages any more.
> I will attach a file for a response that can actually cause me to run into 
> this problem as well as the sample code.





[jira] [Commented] (KAFKA-3200) MessageSet from broker seems invalid

2016-02-03 Thread Rajiv Kurian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131150#comment-15131150
 ] 

Rajiv Kurian commented on KAFKA-3200:
-

Not attaching code any more since this seems known. Glad I am not the first one 
facing this.

> MessageSet from broker seems invalid
> 
>
> Key: KAFKA-3200
> URL: https://issues.apache.org/jira/browse/KAFKA-3200
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
> Environment: Linux,  running Oracle JVM 1.8
>Reporter: Rajiv Kurian
>
> I am writing a java consumer client for Kafka and using the protocol guide at 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  to parse buffers. I am currently running into a problem parsing certain 
> fetch responses. Many times it works fine but some other times it does not. 
> It might just be a bug with my implementation in which case I apologize.
> My messages are uncompressed and exactly 23 bytes in length and have null 
> keys. So each Message in my MessageSet is exactly size 4 (crc) + 
> 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) + 0 (key is null) + 
> 4(num_value_bytes) + 23(value_bytes) = 37 bytes.
> So each element of the MessageSet itself is exactly 37 (size of message) + 8 
> (offset) + 4 (message_size) = 49 bytes.
> In comparison an empty message set element should be of size 8 (offset) + 4 
> (message_size) + 4 (crc) + 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) 
> + 0 (key is null) + 4(num_value_bytes) + 0(value is null)  = 26 bytes
> I occasionally receive a MessageSet which says size is 1000. A size of 1000 
> is not divisible by my MessageSet element size which is 49 bytes. When I 
> parse such a message set I can actually read 20 message set elements (49 
> bytes each), which is 980 bytes. I now have 20 extra bytes to parse, which is 
> actually less than even an empty message (26 bytes). At this moment I don't 
> know how to parse the messages any more.
> I will attach a file for a response that can actually cause me to run into 
> this problem as well as the sample code.





[jira] [Commented] (KAFKA-3200) MessageSet from broker seems invalid

2016-02-03 Thread Rajiv Kurian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131148#comment-15131148
 ] 

Rajiv Kurian commented on KAFKA-3200:
-

Yeah I have a similar workaround but this seems unfortunate.

> MessageSet from broker seems invalid
> 
>
> Key: KAFKA-3200
> URL: https://issues.apache.org/jira/browse/KAFKA-3200
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
> Environment: Linux,  running Oracle JVM 1.8
>Reporter: Rajiv Kurian
>
> I am writing a java consumer client for Kafka and using the protocol guide at 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  to parse buffers. I am currently running into a problem parsing certain 
> fetch responses. Many times it works fine but some other times it does not. 
> It might just be a bug with my implementation in which case I apologize.
> My messages are uncompressed and exactly 23 bytes in length and have null 
> keys. So each Message in my MessageSet is exactly size 4 (crc) + 
> 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) + 0 (key is null) + 
> 4(num_value_bytes) + 23(value_bytes) = 37 bytes.
> So each element of the MessageSet itself is exactly 37 (size of message) + 8 
> (offset) + 4 (message_size) = 49 bytes.
> In comparison an empty message set element should be of size 8 (offset) + 4 
> (message_size) + 4 (crc) + 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) 
> + 0 (key is null) + 4(num_value_bytes) + 0(value is null)  = 26 bytes
> I occasionally receive a MessageSet which says size is 1000. A size of 1000 
> is not divisible by my MessageSet element size which is 49 bytes. When I 
> parse such a message set I can actually read 20 message set elements (49 
> bytes each), which is 980 bytes. I now have 20 extra bytes to parse, which is 
> actually less than even an empty message (26 bytes). At this moment I don't 
> know how to parse the messages any more.
> I will attach a file for a response that can actually cause me to run into 
> this problem as well as the sample code.





Build failed in Jenkins: kafka-trunk-jdk7 #1013

2016-02-03 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3092: Replace SinkTask onPartitionsAssigned/onPartitionsRevoked

[me] MINOR: Some more Kafka Streams Javadocs

--
[...truncated 6919 lines...]

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJava
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSourceConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSinkConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateAndStop PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.

[jira] [Commented] (KAFKA-3200) MessageSet from broker seems invalid

2016-02-03 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131142#comment-15131142
 ] 

Dana Powers commented on KAFKA-3200:


the minimum Message payload size is 26 bytes (8 offset, 4 size, 14 for an 
'empty' message), so generally I would break if there are less than 26 bytes 
left and then also break if the decoded size is larger than the remaining 
buffer.

for reference, the code I wrote to handle message set decoding in kafka-python 
is here: 
https://github.com/dpkp/kafka-python/blob/master/kafka/protocol/message.py#L123-L158
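A minimal sketch of that guard logic (a hypothetical Python helper, loosely following the kafka-python decoder linked above; it assumes the 0.9 wire format with an 8-byte offset and 4-byte size per element):

```python
import struct

MIN_ELEMENT_BYTES = 26  # 8 (offset) + 4 (size) + 14 ('empty' message)

def decode_message_set(buf):
    """Decode complete message-set elements; drop a trailing partial one."""
    messages = []
    pos = 0
    while len(buf) - pos >= MIN_ELEMENT_BYTES:
        offset, size = struct.unpack_from('>qi', buf, pos)
        if size > len(buf) - pos - 12:
            break  # decoded size overruns the buffer: truncated final message
        messages.append((offset, buf[pos + 12:pos + 12 + size]))
        pos += 12 + size
    return messages
```

Feeding it a buffer with two minimal elements plus 20 trailing bytes (as in the report below) yields exactly two messages and silently discards the remainder.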

> MessageSet from broker seems invalid
> 
>
> Key: KAFKA-3200
> URL: https://issues.apache.org/jira/browse/KAFKA-3200
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
> Environment: Linux,  running Oracle JVM 1.8
>Reporter: Rajiv Kurian
>
> I am writing a java consumer client for Kafka and using the protocol guide at 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  to parse buffers. I am currently running into a problem parsing certain 
> fetch responses. Many times it works fine but some other times it does not. 
> It might just be a bug with my implementation in which case I apologize.
> My messages are uncompressed and exactly 23 bytes in length and have null 
> keys. So each Message in my MessageSet is exactly size 4 (crc) + 
> 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) + 0 (key is null) + 
> 4(num_value_bytes) + 23(value_bytes) = 37 bytes.
> So each element of the MessageSet itself is exactly 37 (size of message) + 8 
> (offset) + 4 (message_size) = 49 bytes.
> In comparison an empty message set element should be of size 8 (offset) + 4 
> (message_size) + 4 (crc) + 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) 
> + 0 (key is null) + 4(num_value_bytes) + 0(value is null)  = 26 bytes
> I occasionally receive a MessageSet which says size is 1000. A size of 1000 
> is not divisible by my MessageSet element size which is 49 bytes. When I 
> parse such a message set I can actually read 20 message set elements (49 
> bytes) which is 980 bytes. I have 20 extra bytes to parse now which is 
> actually less than even an empty message (26 bytes). At this moment I don't 
> know how to parse the messages any more.
> I will attach a file for a response that can actually cause me to run into 
> this problem as well as the sample code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3200) MessageSet from broker seems invalid

2016-02-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131120#comment-15131120
 ] 

Grant Henke commented on KAFKA-3200:


There is a relevant discussion in the mailing list here: 
http://search-hadoop.com/m/uyzND1DLC9128gBxc&subj=Incomplete+Messages

As [~jkreps] mentions there, we may be able to fix it now. Perhaps now is a 
good time too with the changes going in for KIP-32 & KIP-33.

> MessageSet from broker seems invalid
> 
>
> Key: KAFKA-3200
> URL: https://issues.apache.org/jira/browse/KAFKA-3200
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
> Environment: Linux,  running Oracle JVM 1.8
>Reporter: Rajiv Kurian
>
> I am writing a java consumer client for Kafka and using the protocol guide at 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  to parse buffers. I am currently running into a problem parsing certain 
> fetch responses. Many times it works fine but some other times it does not. 
> It might just be a bug with my implementation in which case I apologize.
> My messages are uncompressed and exactly 23 bytes in length and have null 
> keys. So each Message in my MessageSet is exactly size 4 (crc) + 
> 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) + 0 (key is null) + 
> 4(num_value_bytes) + 23(value_bytes) = 37 bytes.
> So each element of the MessageSet itself is exactly 37 (size of message) + 8 
> (offset) + 4 (message_size) = 49 bytes.
> In comparison an empty message set element should be of size 8 (offset) + 4 
> (message_size) + 4 (crc) + 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) 
> + 0 (key is null) + 4(num_value_bytes) + 0(value is null)  = 26 bytes
> I occasionally receive a MessageSet which says size is 1000. A size of 1000 
> is not divisible by my MessageSet element size which is 49 bytes. When I 
> parse such a message set I can actually read 20 message set elements (49 
> bytes) which is 980 bytes. I have 20 extra bytes to parse now which is 
> actually less than even an empty message (26 bytes). At this moment I don't 
> know how to parse the messages any more.
> I will attach a file for a response that can actually cause me to run into 
> this problem as well as the sample code.





[jira] [Created] (KAFKA-3200) MessageSet from broker seems invalid

2016-02-03 Thread Rajiv Kurian (JIRA)
Rajiv Kurian created KAFKA-3200:
---

 Summary: MessageSet from broker seems invalid
 Key: KAFKA-3200
 URL: https://issues.apache.org/jira/browse/KAFKA-3200
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
 Environment: Linux,  running Oracle JVM 1.8
Reporter: Rajiv Kurian


I am writing a java consumer client for Kafka and using the protocol guide at 
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol 
to parse buffers. I am currently running into a problem parsing certain fetch 
responses. Many times it works fine but some other times it does not. It might 
just be a bug with my implementation in which case I apologize.

My messages are uncompressed and exactly 23 bytes in length and have null keys. 
So each Message in my MessageSet is exactly size 4 (crc) + 1(magic_bytes) + 1 
(attributes) + 4(key_num_bytes) + 0 (key is null) + 4(num_value_bytes) + 
23(value_bytes) = 37 bytes.

So each element of the MessageSet itself is exactly 37 (size of message) + 8 
(offset) + 4 (message_size) = 49 bytes.

In comparison an empty message set element should be of size 8 (offset) + 4 
(message_size) + 4 (crc) + 1(magic_bytes) + 1 (attributes) + 4(key_num_bytes) + 
0 (key is null) + 4(num_value_bytes) + 0(value is null)  = 26 bytes

I occasionally receive a MessageSet which says size is 1000. A size of 1000 is 
not divisible by my MessageSet element size which is 49 bytes. When I parse 
such a message set I can actually read 20 message set elements (49 bytes) 
which is 980 bytes. I have 20 extra bytes to parse now which is actually less 
than even an empty message (26 bytes). At this moment I don't know how to parse 
the messages any more.
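The arithmetic in the report can be checked directly (an illustrative sanity script, not part of any Kafka client):

```python
# Sizes from the report: uncompressed message, null key, 23-byte value
crc, magic, attrs, key_len, value_len = 4, 1, 1, 4, 4
message = crc + magic + attrs + key_len + 0 + value_len + 23       # 37 bytes
element = 8 + 4 + message                                          # offset + message_size: 49 bytes
empty_element = 8 + 4 + crc + magic + attrs + key_len + value_len  # 26 bytes

full, leftover = divmod(1000, element)
print(full, leftover)  # 20 complete elements, 20 bytes left over
```

The 20 leftover bytes are shorter than even an empty element, which is consistent with the broker truncating the final message at the fetch size limit; the protocol guide notes that a partial message may appear at the end of a message set and should be discarded by the client.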

I will attach a file for a response that can actually cause me to run into this 
problem as well as the sample code.





[jira] [Commented] (KAFKA-3129) Console Producer/Consumer Issue

2016-02-03 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131021#comment-15131021
 ] 

Vahid Hashemian commented on KAFKA-3129:


[~hachikuji] I attached debug logs from server.log for both success and failure 
scenarios. If there are other logs I should also look at, please let me know.

> Console Producer/Consumer Issue
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the time it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.





[jira] [Comment Edited] (KAFKA-3129) Console Producer/Consumer Issue

2016-02-03 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131005#comment-15131005
 ] 

Vahid Hashemian edited comment on KAFKA-3129 at 2/3/16 7:49 PM:


Attached server.log.abnormal.txt, which is the server.log entries for an 
unsuccessful console producer of 10,000 entries. Only 9,864 entries out of 
10,000 were produced.


was (Author: vahid):
server.log entries for an unsuccessful console producer of 10,000 entries. Only 
9,864 entries out of 10,000 were produced.

> Console Producer/Consumer Issue
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the time it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.





[jira] [Comment Edited] (KAFKA-3129) Console Producer/Consumer Issue

2016-02-03 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130998#comment-15130998
 ] 

Vahid Hashemian edited comment on KAFKA-3129 at 2/3/16 7:49 PM:


Attached server.log.normal.txt, which is the server.log entries for a 
successful console producer of 10,000 entries.


was (Author: vahid):
server.log entries for a successful console producer of 10,000 entries.

> Console Producer/Consumer Issue
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the time it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.





[jira] [Updated] (KAFKA-3129) Console Producer/Consumer Issue

2016-02-03 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3129:
---
Attachment: server.log.abnormal.txt

server.log entries for an unsuccessful console producer of 10,000 entries. Only 
9,864 entries out of 10,000 were produced.

> Console Producer/Consumer Issue
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the time it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.





[jira] [Updated] (KAFKA-3129) Console Producer/Consumer Issue

2016-02-03 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3129:
---
Attachment: server.log.normal.txt

server.log entries for a successful console producer of 10,000 entries.

> Console Producer/Consumer Issue
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the time it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.





[GitHub] kafka pull request: MINOR: some more Javadocs

2016-02-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/853


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3092) Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130955#comment-15130955
 ] 

ASF GitHub Bot commented on KAFKA-3092:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/815


> Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract
> -
>
> Key: KAFKA-3092
> URL: https://issues.apache.org/jira/browse/KAFKA-3092
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> The purpose of the onPartitionsRevoked() and onPartitionsAssigned() methods 
> exposed in Kafka Connect's SinkTask interface seems a little unclear and too 
> closely tied to consumer semantics. From the javadoc, these APIs are used to 
> open/close per-partition resources, but that would suggest that we should 
> always get one call to onPartitionsAssigned() before writing any records for 
> the corresponding partitions and one call to onPartitionsRevoked() when we 
> have finished with them. However, the same methods on the consumer are used 
> to indicate phases of the rebalance operation: onPartitionsRevoked() is 
> called before the rebalance begins and onPartitionsAssigned() is called after 
> it completes. In particular, the consumer does not guarantee a final call to 
> onPartitionsRevoked(). 
> This mismatch makes the contract of these methods unclear. In fact, the 
> WorkerSinkTask currently does not guarantee the initial call to 
> onPartitionsAssigned(), nor the final call to onPartitionsRevoked(). Instead, 
> the task implementation must pull the initial assignment from the 
> SinkTaskContext. To make it more confusing, the call to commit offsets 
> following onPartitionsRevoked() causes a flush() on a partition which had 
> already been revoked. All of this makes it difficult to use this API as 
> suggested in the javadocs.
> To fix this, we should clarify the behavior of these methods and consider 
> renaming them to avoid confusion with the same methods in the consumer API. 
> If onPartitionsAssigned() is meant for opening resources, maybe we can rename 
> it to open(). Similarly, onPartitionsRevoked() can be renamed to close(). We 
> can then fix the code to ensure that a typical open/close contract is 
> enforced. This would also mean removing the need to pass the initial 
> assignment in the SinkTaskContext. This would give the following API:
> {code}
> void open(Collection<TopicPartition> partitions);
> void close(Collection<TopicPartition> partitions);
> {code}
> We could also consider going a little further. Instead of depending on 
> onPartitionsAssigned() to open resources, tasks could open partition 
> resources on demand as records are received. In general, connectors will need 
> some way to close partition-specific resources, but there might not be any 
> need to pass the full list of partitions to close since the only open 
> resources should be those that have received writes since the last rebalance. 
> In this case, we just have a single method:
> {code}
> void close();
> {code}
> The downside to this is that the difference between close() and stop() then 
> becomes a little unclear.
> Obviously these are not compatible changes and connectors would have to be 
> updated.
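The open/close contract proposed above can be illustrated with a toy model (Python for brevity; the class and field names are hypothetical, not the actual Connect API):

```python
class SinkTaskSketch:
    """Toy model of the proposed contract: open() runs before any records
    arrive for a partition, close() exactly once when it is revoked."""

    def __init__(self):
        self.writers = {}  # partition -> per-partition resource

    def open(self, partitions):
        for tp in partitions:
            self.writers[tp] = []  # stand-in for opening a writer

    def put(self, tp, record):
        self.writers[tp].append(record)  # open() has already run for tp

    def close(self, partitions):
        for tp in partitions:
            self.writers.pop(tp)  # release the resource exactly once
```

Unlike the consumer's rebalance callbacks, this pairing guarantees that every resource opened is eventually closed, and that nothing is flushed for an already-revoked partition.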





[jira] [Resolved] (KAFKA-3092) Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract

2016-02-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3092.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 815
[https://github.com/apache/kafka/pull/815]

> Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract
> -
>
> Key: KAFKA-3092
> URL: https://issues.apache.org/jira/browse/KAFKA-3092
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> The purpose of the onPartitionsRevoked() and onPartitionsAssigned() methods 
> exposed in Kafka Connect's SinkTask interface seems a little unclear and too 
> closely tied to consumer semantics. From the javadoc, these APIs are used to 
> open/close per-partition resources, but that would suggest that we should 
> always get one call to onPartitionsAssigned() before writing any records for 
> the corresponding partitions and one call to onPartitionsRevoked() when we 
> have finished with them. However, the same methods on the consumer are used 
> to indicate phases of the rebalance operation: onPartitionsRevoked() is 
> called before the rebalance begins and onPartitionsAssigned() is called after 
> it completes. In particular, the consumer does not guarantee a final call to 
> onPartitionsRevoked(). 
> This mismatch makes the contract of these methods unclear. In fact, the 
> WorkerSinkTask currently does not guarantee the initial call to 
> onPartitionsAssigned(), nor the final call to onPartitionsRevoked(). Instead, 
> the task implementation must pull the initial assignment from the 
> SinkTaskContext. To make it more confusing, the call to commit offsets 
> following onPartitionsRevoked() causes a flush() on a partition which had 
> already been revoked. All of this makes it difficult to use this API as 
> suggested in the javadocs.
> To fix this, we should clarify the behavior of these methods and consider 
> renaming them to avoid confusion with the same methods in the consumer API. 
> If onPartitionsAssigned() is meant for opening resources, maybe we can rename 
> it to open(). Similarly, onPartitionsRevoked() can be renamed to close(). We 
> can then fix the code to ensure that a typical open/close contract is 
> enforced. This would also mean removing the need to pass the initial 
> assignment in the SinkTaskContext. This would give the following API:
> {code}
> void open(Collection<TopicPartition> partitions);
> void close(Collection<TopicPartition> partitions);
> {code}
> We could also consider going a little further. Instead of depending on 
> onPartitionsAssigned() to open resources, tasks could open partition 
> resources on demand as records are received. In general, connectors will need 
> some way to close partition-specific resources, but there might not be any 
> need to pass the full list of partitions to close since the only open 
> resources should be those that have received writes since the last rebalance. 
> In this case, we just have a single method:
> {code}
> void close();
> {code}
> The downside to this is that the difference between close() and stop() then 
> becomes a little unclear.
> Obviously these are not compatible changes and connectors would have to be 
> updated.





[GitHub] kafka pull request: KAFKA-3092: Replace SinkTask onPartitionsAssig...

2016-02-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/815




Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-03 Thread Rajini Sivaram
Hi Harsha,

Thank you for the review. Can you clarify - I think you are saying that the
client should send its mechanism over the wire to the server. Is that
correct? The exchange is slightly different in the KIP (the PR matches the
KIP) from the one you described to enable interoperability with 0.9.0.0.
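For reference, the accept-or-error exchange described in the quoted message below might be sketched as follows (all names are hypothetical; the thread does not specify an error code):

```python
SERVER_MECHANISMS = {"GSSAPI", "PLAIN"}  # broker's sasl.mechanism list

def handle_client_mechanism(client_mechanism):
    """First exchange: the client names its mechanism up front; the broker
    either proceeds with that mechanism's handshake or returns an error."""
    if client_mechanism in SERVER_MECHANISMS:
        return ("OK", client_mechanism)
    return ("UNSUPPORTED_SASL_MECHANISM", None)
```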


On Wed, Feb 3, 2016 at 1:56 PM, Harsha  wrote:

> Rajini,
>I looked at the PR you have. I think it's better with your
>earlier approach rather than extending the protocol.
> What I was thinking initially is, Broker has a config option of say
> sasl.mechanism = GSSAPI, PLAIN
> and the client can have similar config of sasl.mechanism=PLAIN. Client
> can send its sasl mechanism before the handshake starts and if the
> broker accepts that particular mechanism then it can go ahead with the
> handshake; otherwise it returns an error saying that the mechanism is not
> allowed.
>
> Thanks,
> Harsha
>
> On Wed, Feb 3, 2016, at 04:58 AM, Rajini Sivaram wrote:
> > A slightly different approach for supporting different SASL mechanisms
> > within a broker is to allow the same "*security protocol*" to be used on
> > different ports with different configuration options. An advantage of
> > this
> > approach is that it extends the configurability of not just SASL, but any
> > protocol. For instance, it would enable the use of SSL with mutual client
> > authentication on one port or different certificate chains on another.
> > And
> > it avoids the need for SASL mechanism negotiation.
> >
> > Kafka would have the same "*security protocols" *defined as today, but
> > with
> > (a single) configurable SASL mechanism. To have different configurations
> > of
> > a protocol within a broker, users can define new protocol names which are
> > configured versions of existing protocols, perhaps using just
> > configuration
> > entries and no additional code.
> >
> > For example:
> >
> > A single mechanism broker would be configured as:
> >
> > listeners=SASL_SSL://:9092
> > sasl.mechanism=GSSAPI
> > sasl.kerberos.class.name=kafka
> > ...
> >
> >
> > And a multi-mechanism broker would be configured as:
> >
> > listeners=gssapi://:9092,plain://:9093,custom://:9094
> > gssapi.security.protocol=SASL_SSL
> > gssapi.sasl.mechanism=GSSAPI
> > gssapi.sasl.kerberos.class.name=kafka
> > ...
> > plain.security.protocol=SASL_SSL
> > plain.sasl.mechanism=PLAIN
> > ..
> > custom.security.protocol=SASL_PLAINTEXT
> > custom.sasl.mechanism=CUSTOM
> > custom.sasl.callback.handler.class=example.CustomCallbackHandler
> >
> >
> >
> > This is still a big change because it affects the currently fixed
> > enumeration of security protocol definitions, but one that is perhaps
> > more
> > flexible than defining every new SASL mechanism as a new security
> > protocol.
> >
> > Thoughts?
> >
> >
> > On Tue, Feb 2, 2016 at 12:20 PM, Rajini Sivaram <
> > rajinisiva...@googlemail.com> wrote:
> >
> > > As Ismael has said, we do not have a requirement to support multiple
> > > protocols in a broker. But I agree with Jun's observation that some
> > > companies might want to support a different authentication mechanism
> for
> > > internal users or partners. For instance, we do use two different
> > > authentication mechanisms, it just so happens that we are able to use
> > > certificate-based authentication for internal users, and hence don't
> > > require multiple SASL mechanisms in a broker.
> > >
> > > As Tao has pointed out, mechanism negotiation is a common usage
> pattern.
> > > Many existing protocols that support SASL do already use this pattern.
> AMQP
> > > (
> > >
> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms
> ),
> > > which, as a messaging protocol maybe closer to Kafka in use cases than
> > > Zookeeper, is an example. Other examples where the client negotiates or
> > > sends SASL mechanism to server include ACAP that is used as an example
> in
> > > the SASL RFCs, POP3, LDAP, SMTP etc. This is not to say that Kafka
> > > shouldn't use a different type of mechanism selection that fits better
> with
> > > the existing Kafka design. Just that negotiation is a common pattern
> and
> > > since we typically turn on javax.net.debug to debug TLS negotiation
> issues,
> > > having to use Kafka logging to debug SASL negotiation issues is not
> that
> > > dissimilar.
> > >
> > >
> > > On Tue, Feb 2, 2016 at 6:12 AM, tao xiao  wrote:
> > >
> > >> I am the author of KIP-44. I hope my use case will add some values to
> this
> > >> discussion. The reason I raised KIP44 is that I want to be able to
> > >> implement a custom security protocol that can fulfill the need of my
> > >> company. As pointed out by Ismael KIP-43 now supports a pluggable way
> to
> > >> inject custom security provider to SASL I think it is enough to cover
> the
> > >> use case I have and address the concerns raised in KIP-44.
> > >>
> > >> For multiple security protocols support simultaneously it is not
> needed in
> > >> my use ca

Re: Kafka cluster performance degradation (Kafka 0.8.2.1)

2016-02-03 Thread Cliff Rhyne
Hi Leo,

We also have a fairly idle CPU for our brokers, but we aren't tracking
latency like you are.  We are mostly interested in overall throughput and
storage.  We are using compression and bundling which add to latency but
decrease storage usage.

Good luck,
Cliff

On Wed, Feb 3, 2016 at 10:55 AM, Clelio De Souza  wrote:

> Hi Cliff,
>
> Thanks for getting back to me on this. Sorry for the delay on replying to
> you, but I was away on holiday.
>
> I have been monitoring the Kafka brokers via JMX and on all nodes CPU usage
> was pretty much idle, under 1% as you said. I should have mentioned before
> that the Kafka cluster was not under heavy load at all. I have reserved a
> TEST Kafka cluster to benchmark the latency in our application. Since my
> last measurements, the latency on this cluster that I have set up on
> 18/01/2016 has increased a bit to ~ 25ms to 33ms (measurement taken today
> 03/02/2016), which indeed indicates the latency will incrementally
> degrade over time.
>
> We still haven't found out what may be causing the degradation of
> performance (i.e. latency). We are not using compression at all for our
> messages, but we decided to perform the latency tests against Kafka
> 0.9.0.0. I have bounce the whole cluster and started with Kafka 0.9.0.0 and
> initial latency tests have shown the latency has been kept the same around
> 25ms to 33ms. It will be interesting to monitor whether the latency will
> start degrading over time (in 1 week's time for instance). Hopefully, with
> Kafka 0.9.0.0 that won't happen.
>
> If you have other ideas what may be causing that, please, let me know. I
> appreciate it.
>
> Cheers,
> Leo
>
> On 21 January 2016 at 16:16, Cliff Rhyne  wrote:
>
> > Hi Leo,
> >
> > I'm not sure if this is the issue you're encountering, but this is what
> we
> > found when we went from 0.8.1.1 to 0.8.2.1.
> >
> > Snappy compression didn't work as expected.  Something in the library
> broke
> > compressing bundles of messages and each message was compressed
> > individually (which for us caused a lot of overhead).  Disk usage went
> way
> > up and CPU usage went incrementally up (still under 1%).  I didn't
> monitor
> > latency, it was well within the tolerances of our system.  We resolved
> this
> > issue by switching our compression to gzip.
> >
> > This issue is supposedly fixed in 0.9.0.0 but we haven't verified it yet.
> >
> > Cliff
> >
> > On Thu, Jan 21, 2016 at 4:04 AM, Clelio De Souza 
> > wrote:
> >
> > > Hi all,
> > >
> > >
> > > We are using Kafka in production and we have been facing some
> performance
> > > degradation of the cluster, apparently after the cluster is a bit
> "old".
> > >
> > >
> > > We have our production cluster which is up and running since 31/12/2015
> > and
> > > performance tests on our application measuring a full round trip of TCP
> > > packets and Kafka producing/consumption of data (3 hops in total for
> > every
> > > single TCP packet being sent, persisted and consumed in the other end).
> > The
> > > results for the production cluster show a latency of ~ 130ms to 200ms.
> > >
> > >
> > > In our Test environment we have the very same software and
> specification
> > in
> > > AWS instances, i.e. the Test environment is a mirror of Prod. The
> Kafka
> > > cluster has been running in Test since 18/12/2015 and the same
> > performance
> > > tests (as described above) show an increase in latency to ~ 800ms to
> > > 1000ms.
> > >
> > >
> > > We have just recently set up a fresh new Kafka cluster (on 18/01/2016)
> > > trying to get to the bottom of this performance degradation problem and
> > in
> > > the new Kafka cluster deployed in Test in replacement of the original
> > Test
> > > Kafka cluster we found a very small latency of ~ 10ms to 15ms.
> > >
> > >
> > > We are using Kafka 0.8.2.1 version for all those environment mentioned
> > > above. And the same cluster configuration has been setup on all of
> them,
> > as
> > > 3 brokers as m3.xlarge AWS instances. The amount of data and Kafka
> topics
> > > are roughly the same among those environments, therefore the
> performance
> > > degradation seems to be not directly related to the amount of data in
> the
> > > cluster. We suspect that something running inside the Kafka cluster,
> > > such as repartitioning or log retention, may be responsible (even though
> > > our topics are set up to last for ~ 2 years and that time has not elapsed at all).
> > >
> > >
> > > The Kafka broker config can be found below. If anyone could shed
> some
> > > light on what may be causing the performance degradation for our
> > Kafka
> > > cluster, it would be great and very much appreciated.
> > >
> > >
> > > Thanks,
> > > Leo
> > >
> > > 
> > >
> > >
> > > # Licensed to the Apache Software Foundation (ASF) under one or more
> > > # contributor license agreements.  See the NOTICE file distributed with
> > > # this work for additional information regarding copyright ownership.
> > > # The ASF l

Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-02-03 Thread Jason Gustafson
Most of the use cases of pause/resume that I've seen work only on single
partitions (e.g in Kafka Streams), so the current varargs method is kind of
nice. It would also be nice to be able to do the following:

consumer.pause(consumer.assignment());

Both variants seem convenient in different situations.

-Jason

On Wed, Feb 3, 2016 at 6:34 AM, Ismael Juma  wrote:

> Hi Becket,
>
> On Wed, Jan 27, 2016 at 10:51 PM, Becket Qin  wrote:
>
> > 2. For seek(), pause(), resume(), it depends on how easily user can use
> > them.
> > If we take current interface, and user have a list of partitions to
> > pause(), what they can do is something like:
> > pause(partitionList.toArray());
> > If we change that to take a collection and user have only one
> partition
> > to pause. They have to do:
> > pause(new List<>(partition));
> > Personally I think the current interface handles both single
> partition
> > and a list of partitions better. It is not ideal that we have to adapt to
> > the interface. I just feel it is weirder to create a new list.
> >
>
> This is not quite right. `toArray` returns an `Object[]`; you would need
> the more verbose:
>
> consumer.pause(partitionList.toArray(new TopicPartition[0]));
>
> And for the other case, the recommended approach would be:
>
> consumer.assign(Collections.singleton(partition));
>
> Or, more concisely (with a static import):
>
> consumer.assign(singletonList(partition));
>
> Do people often call `seek()`, `pause()` and `resume()` with a single
> partition?
>
> Ismael
>


[VOTE] KIP-33 - Add a time based log index to Kafka

2016-02-03 Thread Becket Qin
Hi all,

I would like to initiate the vote for KIP-33.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index

A good amount of the KIP has been touched during the discussion on KIP-32.
So I also put the link to KIP-32 here for reference.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message

Thanks,

Jiangjie (Becket) Qin


[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130806#comment-15130806
 ] 

Eno Thereska commented on KAFKA-3197:
-

[~jjkoshy], [~becket_qin] Makes sense. I can't help but think of the analogy to 
file systems. The only way to guarantee order is to do synchronous requests one 
at a time. Async requests can never guarantee order. I believe the current 
solution you are providing would work, but I wonder if it's worth taking a step 
back and simplifying the options (perhaps to just two: async, with any number 
of requests outstanding, and sync).

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request count to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds that the leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 will be retried 
> and sent to broker B. At this point, all the later messages have already been 
> sent, so we have re-ordering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread Adam Kunicki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Kunicki updated KAFKA-3199:

Issue Type: Improvement  (was: Bug)

> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor which then 
> performs a login and starts a ticket renewal thread. The problem here is that 
> because Kafka performs its own login, it doesn't offer the ability to re-use 
> an existing subject that's already managed by the client application.
> The goal of LoginManager appears to be the ability to return a valid Subject. 
> It would be a simple fix to have LoginManager.acquireLoginManager() check for 
> a new config, e.g. kerberos.use.existing.subject. 
> Instead of creating a new Login in the constructor, this would simply call 
> Subject.getSubject(AccessController.getContext()) to use the already logged 
> in Subject.
> This is also doable without introducing a new configuration and simply 
> checking if there is already a valid Subject available, but I think it may be 
> preferable to require that users explicitly request this behavior.





[jira] [Created] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-03 Thread Adam Kunicki (JIRA)
Adam Kunicki created KAFKA-3199:
---

 Summary: LoginManager should allow using an existing Subject
 Key: KAFKA-3199
 URL: https://issues.apache.org/jira/browse/KAFKA-3199
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 0.9.0.0
Reporter: Adam Kunicki
Assignee: Adam Kunicki
Priority: Critical


LoginManager currently creates a new Login in the constructor which then 
performs a login and starts a ticket renewal thread. The problem here is that 
because Kafka performs its own login, it doesn't offer the ability to re-use an 
existing subject that's already managed by the client application.

The goal of LoginManager appears to be the ability to return a valid Subject. It 
would be a simple fix to have LoginManager.acquireLoginManager() check for a 
new config, e.g. kerberos.use.existing.subject.

Instead of creating a new Login in the constructor, this would simply call 
Subject.getSubject(AccessController.getContext()) to use the already logged in 
Subject.

This is also doable without introducing a new configuration and simply checking 
if there is already a valid Subject available, but I think it may be preferable 
to require that users explicitly request this behavior.






[jira] [Created] (KAFKA-3198) Ticket Renewal Thread exits prematurely due to inverted comparison

2016-02-03 Thread Adam Kunicki (JIRA)
Adam Kunicki created KAFKA-3198:
---

 Summary: Ticket Renewal Thread exits prematurely due to inverted 
comparison
 Key: KAFKA-3198
 URL: https://issues.apache.org/jira/browse/KAFKA-3198
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 0.9.0.0
Reporter: Adam Kunicki
Assignee: Adam Kunicki
Priority: Critical


Line 152 of Login.java:
{code}
if (isUsingTicketCache && tgt.getRenewTill() != null && 
tgt.getRenewTill().getTime() >= expiry) {
{code}

This line is used to determine whether to exit the thread and issue an error to 
the user.

The >= should be < since we are actually able to renew if the renewTill time is 
later than the current ticket expiration.
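The corrected predicate can be sketched as follows. The method and parameter names here are illustrative, not the actual Login.java code:

```java
public class RenewCheck {
    // Returns true when ticket renewal cannot help and the renewal thread
    // should exit with an error: renewal can only extend a ticket up to its
    // renew-till time, so if that time is *before* the current expiry there
    // is nothing left to renew into. renewTillMs may be null when the TGT
    // carries no renew-till time. (Illustrative sketch of the fixed logic.)
    static boolean renewalExhausted(boolean isUsingTicketCache,
                                    Long renewTillMs, long expiryMs) {
        // KAFKA-3198: the original code used >= here, which made the thread
        // bail out exactly when renewal was still possible. The fix is <.
        return isUsingTicketCache && renewTillMs != null && renewTillMs < expiryMs;
    }

    public static void main(String[] args) {
        // renewTill after expiry: renewal can still extend the ticket.
        System.out.println(renewalExhausted(true, 2000L, 1000L)); // prints false
        // renewTill before expiry: renewal cannot help, abort with an error.
        System.out.println(renewalExhausted(true, 500L, 1000L));  // prints true
    }
}
```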





[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130785#comment-15130785
 ] 

Joel Koshy commented on KAFKA-3197:
---

Sorry - hit enter too soon. "in that it does not specifically say that it can 
be used to prevent reordering within a partition"






[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130781#comment-15130781
 ] 

Joel Koshy commented on KAFKA-3197:
---

[~enothereska] - the documentation is accurate in that it does not specifically 
say that it can be used to prevent reordering within a partition. The reality 
though is that everyone (or at least most users) interprets that to mean it is 
possible to achieve strict ordering within a partition, which is necessary for 
several use cases.







[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130761#comment-15130761
 ] 

Jiangjie Qin commented on KAFKA-3197:
-

[~enothereska] The documentation of max.in.flight.requests.per.connection does 
not say it explicitly, but I think the following are the guarantees we currently 
claim (or think) we are providing with different 
max.in.flight.requests.per.connection and retries settings.

1. retries = 0, regardless of max.in.flight.requests.per.connection
The producer itself does not introduce reordering in this case (all messages are 
only sent once), but message sends will very likely fail immediately when events 
such as leader migration occur. Users probably only have three choices when a 
message failure occurs: a) let it go, so the message is dropped; b) close the 
producer if the user does not tolerate message loss or re-ordering; c) resend the 
message and accept re-ordering (this re-ordering is introduced by the user).

2. max.in.flight.requests.per.connection > 1 and retries > 0 (some reasonable number)
No worry about frequent message send failures, but re-ordering can happen when 
there is a retry.

3. max.in.flight.requests.per.connection = 1 and retries > 0
No re-ordering and no frequent failures.

The bug here breaks the 3rd guarantee, which we thought we were providing.
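Case 3 above corresponds to a producer configured roughly like this. This is a sketch only; the property names are assumed from the new-producer configuration of that era:

```java
import java.util.Properties;

public class OrderedProducerConfig {
    // Builds the configuration for case 3: at most one in-flight request per
    // connection plus retries, which is expected to give in-partition ordering
    // without frequent send failures (the guarantee KAFKA-3197 broke before
    // the fix in 0.9.0.1).
    static Properties orderedConfig() {
        Properties props = new Properties();
        props.setProperty("max.in.flight.requests.per.connection", "1");
        props.setProperty("retries", "3"); // any reasonable retry count
        return props;
    }

    public static void main(String[] args) {
        System.out.println(orderedConfig());
    }
}
```

The trade-off is throughput: with only one outstanding request per connection the producer cannot pipeline sends to a broker.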







Re: Kafka cluster performance degradation (Kafka 0.8.2.1)

2016-02-03 Thread Clelio De Souza
Hi Cliff,

Thanks for getting back to me on this. Sorry for the delay on replying to
you, but I was away on holiday.

I have been monitoring the Kafka brokers via JMX and on all nodes CPU usage
was pretty much idle, under 1% as you said. I should have mentioned before
that the Kafka cluster was not under heavy load at all. I have reserved a
TEST Kafka cluster to benchmark the latency in our application. Since my
last measurements, the latency on this cluster that I have setup on
18/01/2016 has increased a bit to ~ 25ms to 33ms (measurement taken today
03/02/2016), which indeed suggests the latency will incrementally
degrade over time.

We still haven't found out what may be causing the degradation of
performance (i.e. latency). We are not using compression at all for our
messages, but we decided to perform the latency tests against Kafka
0.9.0.0. I have bounced the whole cluster and restarted it with Kafka 0.9.0.0, and
initial latency tests have shown the latency has been kept the same around
25ms to 33ms. It will be interesting to monitor whether the latency will
start degrading over time (in 1 week's time for instance). Hopefully, with
Kafka 0.9.0.0 that won't happen.

If you have other ideas what may be causing that, please, let me know. I
appreciate it.

Cheers,
Leo

On 21 January 2016 at 16:16, Cliff Rhyne  wrote:

> Hi Leo,
>
> I'm not sure if this is the issue you're encountering, but this is what we
> found when we went from 0.8.1.1 to 0.8.2.1.
>
> Snappy compression didn't work as expected.  Something in the library broke
> compressing bundles of messages and each message was compressed
> individually (which for us caused a lot of overhead).  Disk usage went way
> up and CPU usage went incrementally up (still under 1%).  I didn't monitor
> latency, it was well within the tolerances of our system.  We resolved this
> issue by switching our compression to gzip.
>
> This issue is supposedly fixed in 0.9.0.0 but we haven't verified it yet.
>
> Cliff
>
> On Thu, Jan 21, 2016 at 4:04 AM, Clelio De Souza 
> wrote:
>
> > Hi all,
> >
> >
> > We are using Kafka in production and we have been facing some performance
> > degradation of the cluster, apparently after the cluster is a bit "old".
> >
> >
> > We have our production cluster which is up and running since 31/12/2015
> and
> > performance tests on our application measuring a full round trip of TCP
> > packets and Kafka producing/consumption of data (3 hops in total for
> every
> > single TCP packet being sent, persisted and consumed in the other end).
> The
> > results for the production cluster show a latency of ~ 130ms to 200ms.
> >
> >
> > In our Test environment we have the very same software and specification
> in
> > AWS instances, i.e. the Test environment is a mirror of Prod. The Kafka
> > cluster has been running in Test since 18/12/2015 and the same
> performance
> > tests (as described above) show an increase in latency to ~ 800ms to
> > 1000ms.
> >
> >
> > We have just recently set up a fresh new Kafka cluster (on 18/01/2016)
> > trying to get to the bottom of this performance degradation problem and
> in
> > the new Kafka cluster deployed in Test in replacement of the original
> Test
> > Kafka cluster we found a very small latency of ~ 10ms to 15ms.
> >
> >
> > We are using Kafka 0.8.2.1 version for all those environment mentioned
> > above. And the same cluster configuration has been setup on all of them,
> as
> > 3 brokers as m3.xlarge AWS instances. The amount of data and Kafka topics
> > are roughly the same among those environments, therefore the performance
> > degradation seems to be not directly related to the amount of data in the
> > cluster. We suspect that something running inside the Kafka cluster,
> > such as repartitioning or log retention, may be responsible (even though
> > our topics are set up to last for ~ 2 years and that time has not elapsed at all).
> >
> >
> > The Kafka broker config can be found below. If anyone could shed some
> > light on what may be causing the performance degradation for our
> Kafka
> > cluster, it would be great and very much appreciated.
> >
> >
> > Thanks,
> > Leo
> >
> > 
> >
> >
> > # Licensed to the Apache Software Foundation (ASF) under one or more
> > # contributor license agreements.  See the NOTICE file distributed with
> > # this work for additional information regarding copyright ownership.
> > # The ASF licenses this file to You under the Apache License, Version 2.0
> > # (the "License"); you may not use this file except in compliance with
> > # the License.  You may obtain a copy of the License at
> > #
> > #http://www.apache.org/licenses/LICENSE-2.0
> > #
> > # Unless required by applicable law or agreed to in writing, software
> > # distributed under the License is distributed on an "AS IS" BASIS,
> > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> implied.
> > # See the License for the specific language governing permissions and
> > # li

[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130616#comment-15130616
 ] 

Grant Henke commented on KAFKA-3088:


[~ijuma] Sounds good. Thanks for the confirmation. I will make a patch for 
review with those changes.

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
> Fix For: 0.9.0.1
>
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)





[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130609#comment-15130609
 ] 

Ismael Juma commented on KAFKA-3088:


This may be a candidate for 0.9.0.1.






[jira] [Updated] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-03 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3088:
---
Fix Version/s: 0.9.0.1






[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130601#comment-15130601
 ] 

Ismael Juma commented on KAFKA-3088:


[~granthenke], your suggestion seems to make sense given the favoured approach 
by the commenters in this JIRA (which is to restore compatibility).






[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-02-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130571#comment-15130571
 ] 

Grant Henke commented on KAFKA-3088:


I have written a basic unit test that sends a produce request with a null 
client-id in order to start fixing this issue, but I have found that the 
behavior is different between 0.8.2, 0.9.0, and trunk. The error messages and 
explanations are below:

*0.8.2:* No issue.
*0.9.0:* Parses the message fine, but the resulting null causes issues:
{code}
[2016-02-02 22:46:25,230] ERROR [KafkaApi-1] error when handling request Name: 
ProducerRequest; Version: 1; CorrelationId: -1; ClientId: null; RequiredAcks: 
-1; AckTimeoutMs: 1 ms; TopicAndPartition:  (kafka.server.KafkaApis:103)
java.lang.NullPointerException
at 
org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
at 
org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
at 
org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
at 
org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
at 
kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
at 
kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:354)
at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:360)
at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
{code}
*trunk:* Uses the new o.a.k requests since KAFKA-2071. Null is not allowed in 
client_id, therefore does not parse:
{code}
[2016-02-02 22:53:25,745] ERROR Closing socket for 
127.0.0.1:57399-127.0.0.1:57400 because of error (kafka.network.Processor:103)
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'client_id': java.lang.NegativeArraySizeException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
at 
org.apache.kafka.common.requests.RequestHeader.parse(RequestHeader.java:80)
at kafka.network.RequestChannel$Request.(RequestChannel.scala:82)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:745)
{code}

Based on those findings I want to open the question: 
*In trunk do we want to change the o.a.k protocol to support a nullable string 
and allow client_id to be nullable?* It looks like there was clear intention to 
have it required based on how the protocol is defined. This is simple to do and 
it was recently done for byte arrays in KAFKA-2695. The answer for how to 
handle trunk should help drive the solution for 0.9.0.

*Note:* I will also add tests for other "malformed" requests as part of this 
fix/patch.
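A nullable string on the wire is typically encoded by reserving a negative length for null (roughly what KAFKA-2695 did for byte arrays). The following is a stand-alone sketch of that idea, not the actual Kafka protocol code; class and method names are illustrative only:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class NullableStringDemo {
    // Write a nullable string as a length-prefixed field; length -1 encodes null.
    static void writeNullableString(ByteBuffer buf, String s) {
        if (s == null) {
            buf.putShort((short) -1);
        } else {
            byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
            buf.putShort((short) bytes.length);
            buf.put(bytes);
        }
    }

    // Read it back; a negative length yields null instead of attempting
    // to allocate a negative-size array (the NegativeArraySizeException above).
    static String readNullableString(ByteBuffer buf) {
        short len = buf.getShort();
        if (len < 0) {
            return null;
        }
        byte[] bytes = new byte[len];
        buf.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        writeNullableString(buf, null);
        writeNullableString(buf, "client-1");
        buf.flip();
        System.out.println(readNullableString(buf)); // null
        System.out.println(readNullableString(buf)); // client-1
    }
}
```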

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache

Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-02-03 Thread Ismael Juma
Hi Becket,

On Wed, Jan 27, 2016 at 10:51 PM, Becket Qin  wrote:

> 2. For seek(), pause(), resume(), it depends on how easily users can use
> them.
> If we take the current interface, and users have a list of partitions to
> pause(), what they can do is something like:
> pause(partitionList.toArray());
> If we change that to take a collection and users have only one partition
> to pause, they have to do:
> pause(new List<>(partition));
> Personally I think the current interface handles both a single partition
> and a list of partitions better. It is not ideal that we have to adapt to
> the interface. I just feel it is weirder to create a new list.
>

This is not quite right. `toArray` returns an `Object[]`, you would need
the more verbose:

consumer.pause(partitionList.toArray(new TopicPartition[0]));

And for the other case, the recommended approach would be:

consumer.assign(Collections.singleton(partition));

Or, more concisely (with a static import):

consumer.assign(singletonList(partition));
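The two call patterns can be checked with a small stand-alone sketch; plain strings stand in for TopicPartition here purely for illustration:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class CollectionCallDemo {
    public static void main(String[] args) {
        List<String> partitionList = Arrays.asList("topic-0", "topic-1");

        // List.toArray() returns Object[]; a typed array argument is
        // required to obtain a String[] for an array-based API.
        String[] typed = partitionList.toArray(new String[0]);
        System.out.println(typed.length); // 2

        // For the single-element case, Collections.singleton avoids
        // constructing a list by hand before a Collection-based call.
        Set<String> one = Collections.singleton("topic-0");
        System.out.println(one.size()); // 1
    }
}
```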

Do people often call `seek()`, `pause()` and `resume()` with a single
partition?

Ismael


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-03 Thread Harsha
Rajini,
   I looked at the PR you have. I think it's better with your
   earlier approach rather than extending the protocol.
What I was thinking initially is: the broker has a config option of, say,
sasl.mechanism = GSSAPI, PLAIN
and the client can have a similar config of sasl.mechanism=PLAIN. The client
can send its SASL mechanism before the handshake starts, and if the
broker accepts that particular mechanism then it can go ahead with the
handshake; otherwise it returns an error saying that the mechanism is not
allowed. 
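The proposal amounts to a simple membership check on the broker side before the SASL handshake begins. An illustrative sketch under that assumption (names here are hypothetical, not actual Kafka code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class MechanismCheckDemo {
    // Broker-side check: accept the client's requested mechanism
    // only if it is in the broker's enabled set.
    static String negotiate(Set<String> enabled, String requested) {
        if (enabled.contains(requested)) {
            return "OK: proceed with " + requested + " handshake";
        }
        return "ERROR: mechanism " + requested + " not allowed";
    }

    public static void main(String[] args) {
        // Broker configured with sasl.mechanism = GSSAPI, PLAIN
        Set<String> enabled = new HashSet<>(Arrays.asList("GSSAPI", "PLAIN"));
        System.out.println(negotiate(enabled, "PLAIN"));
        System.out.println(negotiate(enabled, "SCRAM-SHA-256"));
    }
}
```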

Thanks,
Harsha

On Wed, Feb 3, 2016, at 04:58 AM, Rajini Sivaram wrote:
> A slightly different approach for supporting different SASL mechanisms
> within a broker is to allow the same "*security protocol*" to be used on
> different ports with different configuration options. An advantage of
> this
> approach is that it extends the configurability of not just SASL, but any
> protocol. For instance, it would enable the use of SSL with mutual client
> authentication on one port or different certificate chains on another.
> And
> it avoids the need for SASL mechanism negotiation.
> 
> Kafka would have the same "*security protocols*" defined as today, but
> with
> (a single) configurable SASL mechanism. To have different configurations
> of
> a protocol within a broker, users can define new protocol names which are
> configured versions of existing protocols, perhaps using just
> configuration
> entries and no additional code.
> 
> For example:
> 
> A single mechanism broker would be configured as:
> 
> listeners=SASL_SSL://:9092
> sasl.mechanism=GSSAPI
> sasl.kerberos.class.name=kafka
> ...
> 
> 
> And a multi-mechanism broker would be configured as:
> 
> listeners=gssapi://:9092,plain://:9093,custom://:9094
> gssapi.security.protocol=SASL_SSL
> gssapi.sasl.mechanism=GSSAPI
> gssapi.sasl.kerberos.class.name=kafka
> ...
> plain.security.protocol=SASL_SSL
> plain.sasl.mechanism=PLAIN
> ..
> custom.security.protocol=SASL_PLAINTEXT
> custom.sasl.mechanism=CUSTOM
> custom.sasl.callback.handler.class=example.CustomCallbackHandler
> 
> 
> 
> This is still a big change because it affects the currently fixed
> enumeration of security protocol definitions, but one that is perhaps
> more
> flexible than defining every new SASL mechanism as a new security
> protocol.
> 
> Thoughts?
> 
> 
> On Tue, Feb 2, 2016 at 12:20 PM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
> 
> > As Ismael has said, we do not have a requirement to support multiple
> > protocols in a broker. But I agree with Jun's observation that some
> > companies might want to support a different authentication mechanism for
> > internal users or partners. For instance, we do use two different
> > authentication mechanisms, it just so happens that we are able to use
> > certificate-based authentication for internal users, and hence don't
> > require multiple SASL mechanisms in a broker.
> >
> > As Tao has pointed out, mechanism negotiation is a common usage pattern.
> > Many existing protocols that support SASL do already use this pattern. AMQP
> > (
> > http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms),
> which, as a messaging protocol, may be closer to Kafka in use cases than
> > Zookeeper, is an example. Other examples where the client negotiates or
> > sends SASL mechanism to server include ACAP that is used as an example in
> > the SASL RFCs, POP3, LDAP, SMTP etc. This is not to say that Kafka
> > shouldn't use a different type of mechanism selection that fits better with
> > the existing Kafka design. Just that negotiation is a common pattern and
> > since we typically turn on javax.net.debug to debug TLS negotiation issues,
> > having to use Kafka logging to debug SASL negotiation issues is not that
> > dissimilar.
> >
> >
> > On Tue, Feb 2, 2016 at 6:12 AM, tao xiao  wrote:
> >
> >> I am the author of KIP-44. I hope my use case will add some values to this
> >> discussion. The reason I raised KIP44 is that I want to be able to
> >> implement a custom security protocol that can fulfill the need of my
> >> company. As pointed out by Ismael, KIP-43 now supports a pluggable way to
> >> inject a custom security provider into SASL; I think it is enough to cover the
> >> use case I have and address the concerns raised in KIP-44.
> >>
> >> Support for multiple security protocols simultaneously is not needed in
> >> my use case and I don't foresee it being needed in the future, but as I said
> >> this is my use case only; there may be other use cases that need it. But if
> >> we want to support it in the future, I prefer to get it right in the first
> >> place, given the fact that the security protocol is an ENUM; if we stick to
> >> that implementation it is very hard to extend in the future when we decide
> >> multiple security protocols are needed.
> >>
> >> Protocol negotiation is a very common usage pattern in security domain. As
> >> suggested in Java SASL doc
> >>
> >> http://docs.oracle.com/java

Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-03 Thread Rajini Sivaram
A slightly different approach for supporting different SASL mechanisms
within a broker is to allow the same "*security protocol*" to be used on
different ports with different configuration options. An advantage of this
approach is that it extends the configurability of not just SASL, but any
protocol. For instance, it would enable the use of SSL with mutual client
authentication on one port or different certificate chains on another. And
it avoids the need for SASL mechanism negotiation.

Kafka would have the same "*security protocols*" defined as today, but with
a protocol within a broker, users can define new protocol names which are
configured versions of existing protocols, perhaps using just configuration
entries and no additional code.

For example:

A single mechanism broker would be configured as:

listeners=SASL_SSL://:9092
sasl.mechanism=GSSAPI
sasl.kerberos.class.name=kafka
...


And a multi-mechanism broker would be configured as:

listeners=gssapi://:9092,plain://:9093,custom://:9094
gssapi.security.protocol=SASL_SSL
gssapi.sasl.mechanism=GSSAPI
gssapi.sasl.kerberos.class.name=kafka
...
plain.security.protocol=SASL_SSL
plain.sasl.mechanism=PLAIN
..
custom.security.protocol=SASL_PLAINTEXT
custom.sasl.mechanism=CUSTOM
custom.sasl.callback.handler.class=example.CustomCallbackHandler



This is still a big change because it affects the currently fixed
enumeration of security protocol definitions, but one that is perhaps more
flexible than defining every new SASL mechanism as a new security protocol.

Thoughts?


On Tue, Feb 2, 2016 at 12:20 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> As Ismael has said, we do not have a requirement to support multiple
> protocols in a broker. But I agree with Jun's observation that some
> companies might want to support a different authentication mechanism for
> internal users or partners. For instance, we do use two different
> authentication mechanisms, it just so happens that we are able to use
> certificate-based authentication for internal users, and hence don't
> require multiple SASL mechanisms in a broker.
>
> As Tao has pointed out, mechanism negotiation is a common usage pattern.
> Many existing protocols that support SASL do already use this pattern. AMQP
> (
> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms),
> which, as a messaging protocol, may be closer to Kafka in use cases than
> Zookeeper, is an example. Other examples where the client negotiates or
> sends SASL mechanism to server include ACAP that is used as an example in
> the SASL RFCs, POP3, LDAP, SMTP etc. This is not to say that Kafka
> shouldn't use a different type of mechanism selection that fits better with
> the existing Kafka design. Just that negotiation is a common pattern and
> since we typically turn on javax.net.debug to debug TLS negotiation issues,
> having to use Kafka logging to debug SASL negotiation issues is not that
> dissimilar.
>
>
> On Tue, Feb 2, 2016 at 6:12 AM, tao xiao  wrote:
>
>> I am the author of KIP-44. I hope my use case will add some values to this
>> discussion. The reason I raised KIP44 is that I want to be able to
>> implement a custom security protocol that can fulfill the need of my
>> company. As pointed out by Ismael, KIP-43 now supports a pluggable way to
>> inject a custom security provider into SASL; I think it is enough to cover the
>> use case I have and address the concerns raised in KIP-44.
>>
>> Support for multiple security protocols simultaneously is not needed in
>> my use case and I don't foresee it being needed in the future, but as I said
>> this is my use case only; there may be other use cases that need it. But if
>> we want to support it in the future, I prefer to get it right in the first
>> place, given the fact that the security protocol is an ENUM; if we stick to
>> that implementation it is very hard to extend in the future when we decide
>> multiple security protocols are needed.
>>
>> Protocol negotiation is a very common usage pattern in security domain. As
>> suggested in Java SASL doc
>>
>> http://docs.oracle.com/javase/7/docs/technotes/guides/security/sasl/sasl-refguide.html
>> client
>> first sends out a packet to server and server responds with a list of
>> mechanisms it supports. This is very similar to SSL/TLS negotiation.
>>
>> On Tue, 2 Feb 2016 at 06:39 Ismael Juma  wrote:
>>
>> > On Mon, Feb 1, 2016 at 7:04 PM, Gwen Shapira  wrote:
>> >
>> > > Looking at "existing solutions", it looks like Zookeeper allows
>> plugging
>> > in
>> > > any SASL mechanism, but the server will only support one mechanism at
>> a
>> > > time.
>> > >
>> >
>> > This was the original proposal from Rajini as that is enough for their
>> > needs.
>> >
>> >
>> > > If this is good enough for our use-case (do we actually need to
>> support
>> > > multiple mechanisms at once?), it will simplify life a lot for us (
>> > >

[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130306#comment-15130306
 ] 

Eno Thereska commented on KAFKA-3197:
-

[~becket_qin]: I don't think the documentation for max.in.flight.requests ever 
promised to send messages in order if in-flight requests is set to 1. Requests 
can be sent out of order if there are retries. Could the ordering goal be 
achieved without adding another parameter to the (already long) config but by 
using a combination of in-flight requests=1 and (retries=0 or acks=1)? Thanks.
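The combination suggested above would look roughly like this in the producer configuration (an illustrative fragment; whether it fully restores ordering is exactly the open question of this JIRA):

```properties
# Cap in-flight requests at one per connection
max.in.flight.requests.per.connection=1
# Disable retries so a timed-out request is never re-sent after later batches
retries=0
acks=1
```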

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds the leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 will be retried 
> and sent to broker B. At this point, all the later messages have already been 
> sent, so we have a re-order.
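The interleaving in the five steps can be replayed as a toy simulation of the arrival order at broker B; class and method names here are illustrative only, not producer code:

```java
import java.util.ArrayList;
import java.util.List;

public class ReorderDemo {
    // Replays the five steps above and returns the order in which
    // messages arrive at broker B.
    static List<Integer> simulate() {
        List<Integer> arrivedAtB = new ArrayList<>();
        // Steps 1-2: message 0 goes to broker A; the request is lost,
        // so nothing arrives, but it still counts as in flight to A.
        // Steps 3-4: the leader moves to broker B; there is no in-flight
        // request to B, so messages 1 and 2 are sent there immediately.
        arrivedAtB.add(1);
        arrivedAtB.add(2);
        // Step 5: the request to A times out and message 0 is retried
        // against the new leader B, arriving after 1 and 2.
        arrivedAtB.add(0);
        return arrivedAtB;
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // [1, 2, 0]
    }
}
```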





[jira] [Updated] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3197:

Status: Patch Available  (was: Open)

> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds the leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 will be retried 
> and sent to broker B. At this point, all the later messages have already been 
> sent, so we have a re-order.





[jira] [Commented] (KAFKA-3197) Producer can send message out of order even when in flight request is set to 1.

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130084#comment-15130084
 ] 

ASF GitHub Bot commented on KAFKA-3197:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/857

KAFKA-3197 Fix producer sending records out of order

This patch adds a new configuration to the producer to enforce the maximum 
in-flight batches for each partition. The patch itself is small, but this is an 
interface change. Given this is a pretty important fix, maybe we can run a 
quick KIP on it. 

This patch did not remove the max.in.flight.request.per.connection 
configuration because it might still have some value for throttling the number 
of requests sent to a broker. This is primarily in the broker's interest.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3197

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #857


commit c12c1e2044fe92954e0c8a27f63263f2020ddd3c
Author: Jiangjie Qin 
Date:   2016-02-03T06:51:41Z

KAFKA-3197 Fix producer sending records out of order




> Producer can send message out of order even when in flight request is set to 
> 1.
> ---
>
> Key: KAFKA-3197
> URL: https://issues.apache.org/jira/browse/KAFKA-3197
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The issue we saw is the following:
> 1. Producer sends message 0 to topic-partition-0 on broker A. The in-flight 
> request to broker A is 1.
> 2. The request is somehow lost.
> 3. Producer refreshes its topic metadata and finds the leader of 
> topic-partition-0 migrated from broker A to broker B.
> 4. Because there is no in-flight request to broker B, all the subsequent 
> messages to topic-partition-0 in the record accumulator are sent to broker B.
> 5. Later on, when the request in step (1) times out, message 0 will be retried 
> and sent to broker B. At this point, all the later messages have already been 
> sent, so we have a re-order.





[GitHub] kafka pull request: KAFKA-3197 Fix producer sending records out of...

2016-02-03 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/857

KAFKA-3197 Fix producer sending records out of order

This patch adds a new configuration to the producer to enforce the maximum 
in-flight batches for each partition. The patch itself is small, but this is an 
interface change. Given this is a pretty important fix, maybe we can run a 
quick KIP on it. 

This patch did not remove the max.in.flight.request.per.connection 
configuration because it might still have some value for throttling the number 
of requests sent to a broker. This is primarily in the broker's interest.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3197

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #857


commit c12c1e2044fe92954e0c8a27f63263f2020ddd3c
Author: Jiangjie Qin 
Date:   2016-02-03T06:51:41Z

KAFKA-3197 Fix producer sending records out of order




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130010#comment-15130010
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/693


> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.1.0
>
>
> This adds a lot of noise when running the scripts; see this example from 
> running kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}





[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2016-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130011#comment-15130011
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/693

KAFKA-2875:  remove slf4j multi binding warnings when running from source 
distribution

hi @ijuma I reopened this pr again (sorry for my inexperience using github);
I think I did much deduplication for the script;
Please have a look when you have time  : - )

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/693.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #693


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit ffedf6fd04280e89978531fd73e7fe37a4d9bbed
Author: jinxing 
Date:   2015-12-18T07:24:14Z

KAFKA-2875 Class path contains multiple SLF4J bindings warnings when using 
scripts under bin

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit d56be0b9e0849660c07d656c6019f9cc2f17ae55
Author: ZoneMayor 
Date:   2016-01-10T09:24:06Z

Merge pull request #14 from apache/trunk

2016-1-10

commit a937ad38ac90b90a57a1969bdd8ce06d6faaaeb1
Author: jinxing 
Date:   2016-01-10T10:28:18Z

Merge branch 'trunk-KAFKA-2875' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2875

commit 83b2bcca237ba9445360bbfcb05a0de82c36274f
Author: jinxing 
Date:   2016-01-10T12:39:20Z

KAFKA-2875: wip

commit 6e6f2c20c5730253d8e818c2dc1e5e741a05ac08
Author: jinxing 
Date:   2016-01-10T14:53:28Z

KAFKA-2875: Classpath contains multiple SLF4J bindings warnings when using 
scripts under bin

commit fbd380659727d991dff242be33cc6a3bb78f4861
Author: ZoneMayor 
Date:   2016-01-28T06:28:25Z

Merge pull request #15 from apache/trunk

2016-01-28

commit f21aa55ed68907376d5b0924e228875530cc1046
Author: jinxing 
Date:   2016-01-28T07:10:30Z

KAFKA-2875: remove slf4j multi binding warnings when running form source 
distribution (merge to trunk and resolve conflict)

commit 51fcc408302ebb0c4adaf2a4d0e6647cc469c6a0
Author: jinxing 
Date:   2016-01-28T07:43:52Z

added a new line

commit 8a6cbad74ca4f07a4c70c1d522b604d58e4917c6
Author: jinxing 
Date:   2016-02-01T08:49:06Z

KAFKA-2875: create deduplicated dependant-libs and use symlink to construct 
classpath

commit 153a1177c943e76c9c8457c47244ec59ea91d6fc
Author: jinxing 
Date:   2016-02-01T09:42:37Z

small fix

commit 1d283120bd7c3b90928090c4d22376d4ac05c4d5
Author: jinxing 
Date:   2016-02-01T10:09:46Z

KAFKA-2875: modify classpath in windows bat

commit 29c1797ae4f3ba47445e45049c8fc0fc2e1609f4
Author: jinxing 
Date:   2016-02-01T10:13:20Z

mod server.properties for test

commit a1993e5ca2908862340113ce965bd7fdc5020bab
Author: jinxing 
Date:   2016-02-01T12:44:22Z

KAFKA-2875: small fix

commit e523bd2ce91e03e38c20413aef3c48998fc3c263
Author: jinxing 
Date:   2016-02-01T16:12:27Z

KAFKA-2875: small fix

commit fb27bdeba925e6833ca9bc9feb1d6d3cf55c5aaf
Author: jinxing 
Date:   2016-02-02T09:44:32Z

KAFKA-2875: replace PROJECT_NAMES with PROJECT_NAME, use the central 
deduplicated libs if PROJECT_NAME not specified

commit f8ba5a920a0db8654c0776ad8449b167689c0eb4
Author: jinxing 
Date:   2016-02-02T09:48:00Z

small fix




> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.1.0
>
>
> This adds a lot of noise when running the scripts; see this example from 
> running kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala

[GitHub] kafka pull request: KAFKA-2875: remove slf4j multi binding warning...

2016-02-03 Thread ZoneMayor
GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/693

KAFKA-2875:  remove slf4j multi binding warnings when running from source 
distribution

hi @ijuma I reopened this pr again (sorry for my inexperience using github);
I think I did much deduplication for the script;
Please have a look when you have time  : - )

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/693.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #693


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit ffedf6fd04280e89978531fd73e7fe37a4d9bbed
Author: jinxing 
Date:   2015-12-18T07:24:14Z

KAFKA-2875 Class path contains multiple SLF4J bindings warnings when using 
scripts under bin

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit d56be0b9e0849660c07d656c6019f9cc2f17ae55
Author: ZoneMayor 
Date:   2016-01-10T09:24:06Z

Merge pull request #14 from apache/trunk

2016-1-10

commit a937ad38ac90b90a57a1969bdd8ce06d6faaaeb1
Author: jinxing 
Date:   2016-01-10T10:28:18Z

Merge branch 'trunk-KAFKA-2875' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2875

commit 83b2bcca237ba9445360bbfcb05a0de82c36274f
Author: jinxing 
Date:   2016-01-10T12:39:20Z

KAFKA-2875: wip

commit 6e6f2c20c5730253d8e818c2dc1e5e741a05ac08
Author: jinxing 
Date:   2016-01-10T14:53:28Z

KAFKA-2875: Classpath contains multiple SLF4J bindings warnings when using 
scripts under bin

commit fbd380659727d991dff242be33cc6a3bb78f4861
Author: ZoneMayor 
Date:   2016-01-28T06:28:25Z

Merge pull request #15 from apache/trunk

2016-01-28

commit f21aa55ed68907376d5b0924e228875530cc1046
Author: jinxing 
Date:   2016-01-28T07:10:30Z

KAFKA-2875: remove slf4j multi binding warnings when running form source 
distribution (merge to trunk and resolve conflict)

commit 51fcc408302ebb0c4adaf2a4d0e6647cc469c6a0
Author: jinxing 
Date:   2016-01-28T07:43:52Z

added a new line

commit 8a6cbad74ca4f07a4c70c1d522b604d58e4917c6
Author: jinxing 
Date:   2016-02-01T08:49:06Z

KAFKA-2875: create deduplicated dependant-libs and use symlink to construct 
classpath

commit 153a1177c943e76c9c8457c47244ec59ea91d6fc
Author: jinxing 
Date:   2016-02-01T09:42:37Z

small fix

commit 1d283120bd7c3b90928090c4d22376d4ac05c4d5
Author: jinxing 
Date:   2016-02-01T10:09:46Z

KAFKA-2875: modify classpath in windows bat

commit 29c1797ae4f3ba47445e45049c8fc0fc2e1609f4
Author: jinxing 
Date:   2016-02-01T10:13:20Z

mod server.properties for test

commit a1993e5ca2908862340113ce965bd7fdc5020bab
Author: jinxing 
Date:   2016-02-01T12:44:22Z

KAFKA-2875: small fix

commit e523bd2ce91e03e38c20413aef3c48998fc3c263
Author: jinxing 
Date:   2016-02-01T16:12:27Z

KAFKA-2875: small fix

commit fb27bdeba925e6833ca9bc9feb1d6d3cf55c5aaf
Author: jinxing 
Date:   2016-02-02T09:44:32Z

KAFKA-2875: replace PROJECT_NAMES with PROJECT_NAME, use the central 
deduplicated libs if PROJECT_NAME not specified

commit f8ba5a920a0db8654c0776ad8449b167689c0eb4
Author: jinxing 
Date:   2016-02-02T09:48:00Z

small fix
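
The "central deduplicated libs" and symlink commits above can be sketched 
roughly as follows. The demo/ layout, jar names, and file contents are 
invented for illustration and do not match Kafka's real build output: the idea 
is to keep one shared copy of each dependency and have every project's lib 
directory point at it, so a CLASSPATH built from any one project carries each 
SLF4J binding at most once.

```shell
# Build a throwaway fixture: two projects that each ship the same jar
# (contents are fake; layout and names are invented for this demo).
mkdir -p demo/core/libs demo/tools/libs demo/central
echo fake > demo/core/libs/slf4j-api.jar
echo fake > demo/tools/libs/slf4j-api.jar

# Step 1: copy each unique jar (keyed by basename) into the central dir once.
for jar in demo/core/libs/*.jar demo/tools/libs/*.jar; do
  name=$(basename "$jar")
  [ -e "demo/central/$name" ] || cp "$jar" "demo/central/$name"
done

# Step 2: replace the per-project copies with symlinks into the central dir,
# so both projects resolve the jar to the same single file.
for jar in demo/core/libs/*.jar demo/tools/libs/*.jar; do
  name=$(basename "$jar")
  ln -sf "../../central/$name" "$jar"
done
```

After this, demo/central holds exactly one copy of the jar and both project 
lib directories contain symlinks to it, which is the shape the commit messages 
describe ("create deduplicated dependant-libs and use symlink to construct 
classpath").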




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2875: remove slf4j multi binding warning...

2016-02-03 Thread ZoneMayor
Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/693

