Re: Review Request 28769: Patch for KAFKA-1809

2014-12-05 Thread Joe Stein

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review64163
---



clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java


If we commit this in 0.8.3 once it is completed, it might be good to also have 
the security page link or the JIRA in the comment.



clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java


How do we want to document and use this? Should it be an enumeration backed by 
ints rather than a list of strings?



clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java


I think this should be SecurityProtocol.PLAINTEXT, with an int behind it, 
instead of a String.
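
For illustration, a minimal sketch of what such an int-backed enum could look 
like (hypothetical; the names and ids here are assumptions, not the final 
SecurityProtocol implementation):

{code}
// Hypothetical sketch of an int-backed security protocol enum.
public enum SecurityProtocol {
    PLAINTEXT(0),
    SSL(1);

    // Stable numeric id used on the wire instead of a String name.
    public final int id;

    SecurityProtocol(int id) {
        this.id = id;
    }

    // Look up a protocol by its numeric id.
    public static SecurityProtocol forId(int id) {
        for (SecurityProtocol p : values()) {
            if (p.id == id) {
                return p;
            }
        }
        throw new IllegalArgumentException("Unknown security protocol id: " + id);
    }
}
{code}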



core/src/main/scala/kafka/api/ConsumerMetadataRequest.scala


I think SecurityProtocol.PLAINTEXT instead of ProtocolType.PLAINTEXT, to be 
consistent.



core/src/main/scala/kafka/cluster/Broker.scala


More JSON, OK! I think we either need to change parsers or move away from 
JSON; this comes up in a few other tickets too.



core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala


The same ProtocolType to SecurityProtocol refactor applies here.



core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala


PLAINTEXT here should be read and matched to its int from the 
SecurityProtocol statics as well.


- Joe Stein


On Dec. 5, 2014, 8:34 p.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> ---
> 
> (Updated Dec. 5, 2014, 8:34 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
> https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> changed TopicMetadata to include BrokerEndPoints and fixed a few unit tests
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 525b95e98010cd2053eacd8c321d079bcac2f910 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 32f444ebbd27892275af7a0947b86a6b8317a374 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 72d3ddd0c29bf6c08f9e122c8232bc07612cd448 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 
> b22ca1dce65f665d84c2a980fd82f816e93d9960 
>   clients/src/test/java/org/apache/kafka/clients/NetworkClientTest.java 
> 1a55242e9399fa4669630b55110d530f954e1279 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> 28b12c7b89a56c113b665fbde1b95f873f8624a3 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> 285c0333ff43543d3e46444c1cd9374bb883bb59 
>   core/src/main/scala/kafka/api/ConsumerMetadataRequest.scala 
> 6d00ed090d76cd7925621a9c6db8fb00fb9d48f4 
>   core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
> 84f60178f6ebae735c8aa3e79ed93fe21ac4aea7 
>   core/src/main/scala/kafka/api/TopicMetadata.scala 
> 0190076df0adf906ecd332284f222ff974b315fc 
>   core/src/main/scala/kafka/api/TopicMetadataRequest.scala 
> 7dca09ce637a40e125de05703dc42e8b611971ac 
>   core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
> 92ac4e687be22e4800199c0666bfac5e0059e5bb 
>   core/src/main/scala/kafka/client/ClientUtils.scala 
> ebba87f0566684c796c26cb76c64b4640a5ccfde 
>   core/src/main/scala/kafka/cluster/Broker.scala 
> 0060add008bb3bc4b0092f2173c469fce0120be6 
>   core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/ProtocolType.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/BrokerEndPointNotAvailableException.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
> 9ebbee6c16dc83767297c729d2d74ebbd063a993 
>   core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
> b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
>   core/src/main/scala/kafka/consumer/ConsumerFetcherThread.scala 
> ee6139c901082358382c5ef892281386bf6fc91b 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> da29a8cb461099eb675161db2f11a9937424a5c6 
>   core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
> eb492f0044

Re: Build failed in Jenkins: Kafka-trunk #351

2014-12-05 Thread Neha Narkhede
Thanks for checking, Becket!

On Fri, Dec 5, 2014 at 4:55 PM, Jiangjie Qin 
wrote:

> I reran locally all the tests as well as the failed tests, and all the
> tests passed. It seems to be a transient failure.
>
--Jiangjie (Becket) Qin
>
> On 12/5/14, 4:14 PM, "Apache Jenkins Server" 
> wrote:
>
> >See 
> >
> >Changes:
> >
> >[guwang] KAFKA-1650; avoid data loss when mirror maker shutdown
> >uncleanly; reviewed by Guozhang Wang
> >
> >--
> >[...truncated 477 lines...]

Re: Build failed in Jenkins: Kafka-trunk #351

2014-12-05 Thread Jiangjie Qin
I reran locally all the tests as well as the failed tests, and all the
tests passed. It seems to be a transient failure.

--Jiangjie (Becket) Qin

On 12/5/14, 4:14 PM, "Apache Jenkins Server" 
wrote:

>See 
>
>Changes:
>
>[guwang] KAFKA-1650; avoid data loss when mirror maker shutdown
>uncleanly; reviewed by Guozhang Wang
>
>--
>[...truncated 477 lines...]

Build failed in Jenkins: Kafka-trunk #351

2014-12-05 Thread Apache Jenkins Server
See 

Changes:

[guwang] KAFKA-1650; avoid data loss when mirror maker shutdown uncleanly; 
reviewed by Guozhang Wang

--
[...truncated 477 lines...]
kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWit

[jira] [Commented] (KAFKA-1044) change log4j to slf4j

2014-12-05 Thread Jon Barksdale (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236304#comment-14236304
 ] 

Jon Barksdale commented on KAFKA-1044:
--

More specifically, swapping to slf4j for at least the producer- and 
consumer-related code would allow users to use logging frameworks other than 
log4j, since the code currently does not compile without log4j as a dependency.
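
As a rough illustration, code written against the slf4j facade looks like the 
following; the backend (log4j, logback, etc.) is then chosen at deployment time 
by putting the corresponding binding jar on the classpath (the class name here 
is made up for the example):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProducerExample {
    // slf4j is only a facade: this compiles against slf4j-api alone, and
    // the actual logging backend is picked up from the classpath at runtime.
    private static final Logger log = LoggerFactory.getLogger(ProducerExample.class);

    public void send(String topic, String message) {
        // Parameterized logging avoids string concatenation when the
        // level is disabled.
        log.debug("Sending message to topic {}: {}", topic, message);
    }
}
{code}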

> change log4j to slf4j 
> --
>
> Key: KAFKA-1044
> URL: https://issues.apache.org/jira/browse/KAFKA-1044
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: sjk
>Assignee: Jay Kreps
> Fix For: 0.9.0
>
>
> Can you change log4j to slf4j? In my project I use logback, and it conflicts 
> with log4j.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1501) transient unit tests failures due to port already in use

2014-12-05 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236147#comment-14236147
 ] 

Jay Kreps edited comment on KAFKA-1501 at 12/5/14 9:29 PM:
---

Yeah that Stack Overflow article I linked to previously indicates that as of 
Java 7 the only really reliable way to check is to try to create a socket and 
see if that works. That should be a super non-invasive change too--just 
updating the implementation for choosePorts.

http://stackoverflow.com/questions/434718/sockets-discover-port-availability-using-java

They give a checkAvailable method something like the below. So the new approach 
would be to choose a random port in some range, and then check that it is 
available.

{code}
private static boolean checkAvailable(int port) {
    Socket s = null;
    try {
        s = new Socket("localhost", port);
        return false;
    } catch (IOException e) {
        return true;
    } finally {
        if (s != null) {
            try {
                s.close();
            } catch (IOException e) {
                throw new RuntimeException("You should handle this error.", e);
            }
        }
    }
}
{code}


was (Author: jkreps):
Yeah that Stack Overflow article I linked to previously indicates that as of 
Java 7 the only really reliable way to check is to try to create a socket and 
see if that works. That should be a super non-invasive change too--just 
updating the implementation for choosePorts.

http://stackoverflow.com/questions/434718/sockets-discover-port-availability-using-java

They give a checkAvailable method something like the below. So the new approach 
would be to choose a random port in some range, and then check that it is 
available.

{code}
private static boolean checkAvailable(int port) {
    Socket s = null;
    try {
        s = new Socket("localhost", port);
        return false;
    } catch (IOException e) {
        System.out.println("--Port " + port + " is available");
        return true;
    } finally {
        if (s != null) {
            try {
                s.close();
            } catch (IOException e) {
                throw new RuntimeException("You should handle this error.", e);
            }
        }
    }
}
{code}

> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
> KAFKA-1501.patch, KAFKA-1501.patch, test-100.out, test-100.out, test-27.out, 
> test-29.out, test-32.out, test-35.out, test-38.out, test-4.out, test-42.out, 
> test-45.out, test-46.out, test-51.out, test-55.out, test-58.out, test-59.out, 
> test-60.out, test-69.out, test-72.out, test-74.out, test-76.out, test-84.out, 
> test-87.out, test-91.out, test-92.out
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2014-12-05 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236147#comment-14236147
 ] 

Jay Kreps commented on KAFKA-1501:
--

Yeah that Stack Overflow article I linked to previously indicates that as of 
Java 7 the only really reliable way to check is to try to create a socket and 
see if that works. That should be a super non-invasive change too--just 
updating the implementation for choosePorts.

http://stackoverflow.com/questions/434718/sockets-discover-port-availability-using-java

They give a checkAvailable method something like the below. So the new approach 
would be to choose a random port in some range, and then check that it is 
available.

{code}
private static boolean checkAvailable(int port) {
    Socket s = null;
    try {
        s = new Socket("localhost", port);
        return false;
    } catch (IOException e) {
        System.out.println("--Port " + port + " is available");
        return true;
    } finally {
        if (s != null) {
            try {
                s.close();
            } catch (IOException e) {
                throw new RuntimeException("You should handle this error.", e);
            }
        }
    }
}
{code}
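
Building on that, a minimal sketch of the random-port selection loop 
(illustration only; the real choosePorts in TestUtils may differ in range and 
details):

{code}
import java.io.IOException;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: choose random ports in a range and keep the ones
// that pass the availability check shown above. Not the actual
// TestUtils.choosePorts implementation.
public class PortChooser {
    private static final Random RANDOM = new Random();
    // Assumed bounds; the real implementation may use a different range.
    private static final int MIN_PORT = 20000;
    private static final int MAX_PORT = 65000;

    public static List<Integer> choosePorts(int count) {
        List<Integer> ports = new ArrayList<Integer>();
        while (ports.size() < count) {
            int candidate = MIN_PORT + RANDOM.nextInt(MAX_PORT - MIN_PORT);
            if (!ports.contains(candidate) && checkAvailable(candidate)) {
                ports.add(candidate);
            }
        }
        return ports;
    }

    // Same idea as above: if connecting succeeds, something is listening.
    private static boolean checkAvailable(int port) {
        Socket s = null;
        try {
            s = new Socket("localhost", port);
            return false;
        } catch (IOException e) {
            return true;
        } finally {
            if (s != null) {
                try {
                    s.close();
                } catch (IOException e) {
                    // The connect already told us the port is taken.
                }
            }
        }
    }
}
{code}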

> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
> KAFKA-1501.patch, KAFKA-1501.patch, test-100.out, test-100.out, test-27.out, 
> test-29.out, test-32.out, test-35.out, test-38.out, test-4.out, test-42.out, 
> test-45.out, test-46.out, test-51.out, test-55.out, test-58.out, test-59.out, 
> test-60.out, test-69.out, test-72.out, test-74.out, test-76.out, test-84.out, 
> test-87.out, test-91.out, test-92.out
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2014-12-05 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236081#comment-14236081
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Also, this is badly lacking unit tests.

I'm planning to add a "mock" protocol and add tests (possibly also system tests) 
that actually check that multi-port works as expected: basically, make sure I 
can produce on one port and consume on another.

Suggestions for additional tests will be awesome. 



> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [SECURITY DISCUSSION] Refactoring Brokers to support multiple ports

2014-12-05 Thread Gwen Shapira
Hi,

I posted first draft of the change here:
https://issues.apache.org/jira/browse/KAFKA-1809

It's missing a lot of the feedback from Jun (IPv6, upgrade path, and it could
probably use more tests).
However, it has most of the structure that I had in mind.

So, comments are more than welcome :)

Gwen


On Tue, Dec 2, 2014 at 4:10 PM, Jun Rao  wrote:
> 1. What would the format of an advertised listener look like? If we have two
> hosts separated by a colon, it may make parsing IPv6 harder.
>
> 3.1 Currently, the only public api that exposes requests/responses is the
> SimpleConsumer. Since most people probably use the high level consumer,
> breaking the api in SimpleConsumer may be ok. Alternatively, we can keep
> Broker as it is, but add something like BrokerProfile to represent the full
> broker.
>
> 3.2 We haven't made any intra-broker protocol changes yet. The idea is to
> have a config, something like "use.new.wire.protocol", that defaults to false. In
> phase 1, we do a rolling upgrade of every broker to the new code. In phase
> 2, we set "use.new.wire.protocol" to true and do a rolling bounce of every
> broker again. Yes, we should bump up the protocol version now. As for
> multiple protocol changes within the same release, we can discuss that in
> the mailing list separately. One way that could work is that once a
> protocol change is stable, we can have a discussion in the mailing list and
> declare it stable. If there are new changes after this, we will bump up the
> version. This way, people deploying from trunk will know when it's safe to
> use the new feature.
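>
> For illustration, the two phases would then differ only in this one setting
> (property name as sketched above, so treat it as a placeholder):
>
> use.new.wire.protocol=false   # phase 1: roll out the new code everywhere
> use.new.wire.protocol=true    # phase 2: enable it with a second rolling bounce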
>
> Thanks,
>
> Jun
>
>
> On Tue, Dec 2, 2014 at 2:56 PM, Gwen Shapira  wrote:
>
>> Thank you so much for your help here, Jun!
>> Highlighting the specific protocols is very useful.
>>
>> See some detailed comments below.
>>
>> On Tue, Dec 2, 2014 at 1:58 PM, Jun Rao  wrote:
>> > Hi, Gwen,
>> >
>> > Thanks for writing up the wiki. Some comments below.
>> >
>> > 1. To make it more general, should we support a binding and an advertised
>> > host for each protocol (e.g. plaintext, ssl, etc)? We will also need to
>> > figure out how to specify the wildcard binding host.
>>
>> Yes, that's the idea. Two lines of config: one with a list of listeners
>> (protocol://host:port) and one with a list of advertised listeners.
>> Advertised listeners are optional. I think wildcard binding is
>> normally done with the 0.0.0.0 host (at least for HDFS), so I was planning
>> to keep that convention.
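>>
>> For illustration (hypothetical values; the final property names and
>> protocol labels may differ), the two lines could look something like:
>>
>> listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
>> advertised.listeners=PLAINTEXT://broker1.example.com:9092,SSL://broker1.example.com:9093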
>>
>> >
>> > 2. Broker format change in ZK
>> > The broker registration in ZK needs to store the host/port for all
>> > protocols. We will need to bump up the version of the broker registration
>> > data. Since this is an intra-cluster protocol change, we need an extra
>> > config for rolling upgrades. So, in the first step, each broker is
>> upgraded
>> > and is ready to parse brokers registered in the new format, but not
>> > registering using the new format yet. In the second step, when that new
>> > config is enabled, the broker will register using the new format.
>> >
>>
>> I'm not sure this is necessary in this case. We'll bump the version for
>> sure.
>> And as long as the new format contains all the fields of the previous
>> formats, the JSON deserialization should work and just ignore the new
>> fields.
>> So the new brokers can register with the new format right away, and the
>> old brokers will be able to read that registration with no issues.
>> New brokers will be able to read old registrations but will also know
>> about the extra ports and protocols from the additional field.
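>>
>> For example, a new-format registration could look roughly like this
>> (illustrative only; the exact field names and version number are
>> assumptions):
>>
>> {"version":2, "host":"broker1", "port":9092,
>>  "endpoints":["PLAINTEXT://broker1:9092", "SSL://broker1:9093"]}
>>
>> An old broker would still find the "host" and "port" fields it expects
>> and simply ignore "endpoints".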
>>
>> > 3. Wire protocol changes. Currently, the broker info is used in the
>> > following requests/responses: TopicMetadataResponse ,
>> > ConsumerMetadataResponse, LeaderAndIsrRequest  and UpdateMetadataRequest.
>>
>>
>> > 3.1 TopicMetadataResponse and ConsumerMetadataResponse:
>> > These two are used between the clients and the broker. I am not sure that
>> > we need to make a wire protocol change for them. Currently, the protocol
>> > includes a single host/port pair in those responses. Based on the type of
>> > the port on which the request is sent, it seems that we can just pick the
>> > corresponding host and port to include in the response.
>>
>> The wire protocol will not change here, but the Scala API (i.e., method
>> signatures for the response and request classes) will change from getting
>> brokers (which no longer represent a single host+port pair) to getting
>> endpoints (which do).
>>
>> I assumed the Scala API is public, but perhaps I was wrong there.
>>
>>
>> > 3.2 UpdateMetadataRequest:
>> > This is used between the controller and the broker. Since each broker
>> needs
>> > to cache the host/port of all protocols, we need to make a wire protocol
>> > change. We also need to change the broker format in MetadataCache
>> > accordingly. This is also an intra-cluster protocol change. So the
>> upgrade
>> > path will need to follow that in item 2.
>>
>> Yes. Because the wire protocol is byte-array base

[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2014-12-05 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236072#comment-14236072
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Don't commit this please :)

It builds and unit tests mostly pass... but there are a few big todo items still 
open:
1. Make the protocol backward compatible as we discussed in the mailing list
2. Make tools support multiple protocols (this is hard coded right now)
3. Test with IPv6
4. Support default interface with null (it's currently binding to the wildcard by 
default)

Please take a look and comment! It's a pretty large change, so I may have missed 
something big.

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2014-12-05 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1809:

Status: Patch Available  (was: Open)

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2014-12-05 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1809:

Attachment: KAFKA-1809.patch

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2014-12-05 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236066#comment-14236066
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Created reviewboard https://reviews.apache.org/r/28769/diff/
 against branch origin/trunk

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 28769: Patch for KAFKA-1809

2014-12-05 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/
---

Review request for kafka.


Bugs: KAFKA-1809
https://issues.apache.org/jira/browse/KAFKA-1809


Repository: kafka


Description
---

changed TopicMetadata to include BrokerEndPoints and fixed a few unit tests


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
525b95e98010cd2053eacd8c321d079bcac2f910 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
32f444ebbd27892275af7a0947b86a6b8317a374 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
72d3ddd0c29bf6c08f9e122c8232bc07612cd448 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
7517b879866fc5dad5f8d8ad30636da8bbe7784a 
  clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 
b22ca1dce65f665d84c2a980fd82f816e93d9960 
  clients/src/test/java/org/apache/kafka/clients/NetworkClientTest.java 
1a55242e9399fa4669630b55110d530f954e1279 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
df37fc6d8f0db0b8192a948426af603be3444da4 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
28b12c7b89a56c113b665fbde1b95f873f8624a3 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
285c0333ff43543d3e46444c1cd9374bb883bb59 
  core/src/main/scala/kafka/api/ConsumerMetadataRequest.scala 
6d00ed090d76cd7925621a9c6db8fb00fb9d48f4 
  core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
84f60178f6ebae735c8aa3e79ed93fe21ac4aea7 
  core/src/main/scala/kafka/api/TopicMetadata.scala 
0190076df0adf906ecd332284f222ff974b315fc 
  core/src/main/scala/kafka/api/TopicMetadataRequest.scala 
7dca09ce637a40e125de05703dc42e8b611971ac 
  core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
92ac4e687be22e4800199c0666bfac5e0059e5bb 
  core/src/main/scala/kafka/client/ClientUtils.scala 
ebba87f0566684c796c26cb76c64b4640a5ccfde 
  core/src/main/scala/kafka/cluster/Broker.scala 
0060add008bb3bc4b0092f2173c469fce0120be6 
  core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/ProtocolType.scala PRE-CREATION 
  core/src/main/scala/kafka/common/BrokerEndPointNotAvailableException.scala 
PRE-CREATION 
  core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
9ebbee6c16dc83767297c729d2d74ebbd063a993 
  core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
  core/src/main/scala/kafka/consumer/ConsumerFetcherThread.scala 
ee6139c901082358382c5ef892281386bf6fc91b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
da29a8cb461099eb675161db2f11a9937424a5c6 
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
eb492f00449744bc8d63f55b393e2a1659d38454 
  core/src/main/scala/kafka/controller/KafkaController.scala 
66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
  core/src/main/scala/kafka/javaapi/ConsumerMetadataResponse.scala 
1b28861cdf7dfb30fc696b54f8f8f05f730f4ece 
  core/src/main/scala/kafka/javaapi/TopicMetadata.scala 
f384e04678df10a5b46a439f475c63371bf8e32b 
  core/src/main/scala/kafka/javaapi/TopicMetadataRequest.scala 
b0b7be14d494ae8c87f4443b52db69d273c20316 
  core/src/main/scala/kafka/network/BlockingChannel.scala 
6e2a38eee8e568f9032f95c75fa5899e9715b433 
  core/src/main/scala/kafka/network/SocketServer.scala 
e451592fe358158548117f47a80e807007dd8b98 
  core/src/main/scala/kafka/producer/ProducerConfig.scala 
3cdf23dce3407f1770b9c6543e3a8ae8ab3ff255 
  core/src/main/scala/kafka/producer/ProducerPool.scala 
43df70bb461dd3e385e6b20396adef3c4016a3fc 
  core/src/main/scala/kafka/server/AbstractFetcherManager.scala 
20c00cb8cc2351950edbc8cb1752905a0c26e79f 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
8c281d4668f92eff95a4a5df3c03c4b5b20e7095 
  core/src/main/scala/kafka/server/KafkaApis.scala 
2a1c0326b6e6966d8b8254bd6a1cb83ad98a3b80 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6e26c5436feb4629d17f199011f3ebb674aa767f 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
4acdd70fe9c1ee78d6510741006c2ece65450671 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
  core/src/main/scala/kafka/server/MetadataCache.scala 
bf81a1ab88c14be8697b441eedbeb28fa0112643 
  core/src/main/scala/kafka/server/ReplicaFetcherManager.scala 
351dbbad3bdb709937943b336a5b0a9e0162a5e2 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
6879e730282185bda3d6bc3659cb15af0672cecf 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
e58fbb922e93b0c31dff04f187fcadb4ece986d7 
  core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala 
d1e7c434e77859d746b8dc68dd5d5a3740425e79 
  core/src/main/scala/kafka/tools/GetOffsetShell.scala 
3d9293

[jira] [Created] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2014-12-05 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-1809:
---

 Summary: Refactor brokers to allow listening on multiple ports and 
IPs 
 Key: KAFKA-1809
 URL: https://issues.apache.org/jira/browse/KAFKA-1809
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Gwen Shapira


The goal is to eventually support different security mechanisms on different 
ports. 
Currently brokers are defined as host+port pair, and this definition exists 
throughout the code-base, therefore some refactoring is needed to support 
multiple ports for a single broker.

The detailed design is here: 
https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2014-12-05 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236040#comment-14236040
 ] 

Ewen Cheslack-Postava commented on KAFKA-1501:
--

They've all been patched. I do notice that it's always failing after a few 
tests within a suite, there's no particular test triggering it, and it happens 
for both ZK and Kafka nodes. The patch now allocates all the ports needed when 
the test class is instantiated and reuses them across individual tests. So now 
it looks more like a port is getting left in a state that doesn't allow it to 
be freed, i.e. something closer to the problem people originally thought it 
was. Unfortunately, allocating ports separately for each test isn't easy because 
of the way the tests are currently structured (configs are built during 
construction so setUp() can bring up the servers).

> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
> KAFKA-1501.patch, KAFKA-1501.patch, test-100.out, test-100.out, test-27.out, 
> test-29.out, test-32.out, test-35.out, test-38.out, test-4.out, test-42.out, 
> test-45.out, test-46.out, test-51.out, test-55.out, test-58.out, test-59.out, 
> test-60.out, test-69.out, test-72.out, test-74.out, test-76.out, test-84.out, 
> test-87.out, test-91.out, test-92.out
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1501) transient unit tests failures due to port already in use

2014-12-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1501:
-
Attachment: test-4.out
test-29.out
test-32.out
test-35.out
test-38.out
test-42.out
test-45.out
test-46.out
test-51.out
test-55.out
test-58.out
test-59.out
test-60.out
test-69.out
test-72.out
test-74.out
test-76.out
test-84.out
test-87.out
test-91.out
test-92.out
test-100.out
test-27.out
test-100.out

> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
> KAFKA-1501.patch, KAFKA-1501.patch, test-100.out, test-100.out, test-27.out, 
> test-29.out, test-32.out, test-35.out, test-38.out, test-4.out, test-42.out, 
> test-45.out, test-46.out, test-51.out, test-55.out, test-58.out, test-59.out, 
> test-60.out, test-69.out, test-72.out, test-74.out, test-76.out, test-84.out, 
> test-87.out, test-91.out, test-92.out
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2014-12-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235962#comment-14235962
 ] 

Guozhang Wang commented on KAFKA-1501:
--

Ewen,

Thanks for the patch. I tested it and there are still some "Address already in 
use" errors shown within 100 runs, but with far fewer occurrences it seems. Could 
you take a look and see if some cases were missed in your patch?

I have uploaded the results with those failed cases.

> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
> KAFKA-1501.patch, KAFKA-1501.patch
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27693: Patch for KAFKA-1476

2014-12-05 Thread Balaji Seshadri

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27693/
---

(Updated Dec. 5, 2014, 7:03 p.m.)


Review request for kafka.


Bugs: KAFKA-1476
https://issues.apache.org/jira/browse/KAFKA-1476


Repository: kafka


Description
---

KAFKA-1476 Implemented review comments from Neha and Ashish


Diffs (updated)
-

  core/src/main/scala/kafka/tools/ConsumerCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 

Diff: https://reviews.apache.org/r/27693/diff/


Testing
---


Thanks,

Balaji Seshadri



[jira] [Commented] (KAFKA-1476) Get a list of consumer groups

2014-12-05 Thread BalajiSeshadri (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235908#comment-14235908
 ] 

BalajiSeshadri commented on KAFKA-1476:
---

Updated reviewboard https://reviews.apache.org/r/27693/diff/
 against branch origin/trunk

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Balaji Seshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476_2014-11-10_11:58:26.patch, 
> KAFKA-1476_2014-11-10_12:04:01.patch, KAFKA-1476_2014-11-10_12:06:35.patch, 
> KAFKA-1476_2014-12-05_12:00:12.patch
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option.  If a list 
> of consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1476) Get a list of consumer groups

2014-12-05 Thread BalajiSeshadri (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BalajiSeshadri updated KAFKA-1476:
--
Attachment: KAFKA-1476_2014-12-05_12:00:12.patch

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Balaji Seshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476_2014-11-10_11:58:26.patch, 
> KAFKA-1476_2014-11-10_12:04:01.patch, KAFKA-1476_2014-11-10_12:06:35.patch, 
> KAFKA-1476_2014-12-05_12:00:12.patch
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option.  If a list 
> of consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27693: Patch for KAFKA-1476

2014-12-05 Thread Balaji Seshadri

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27693/
---

(Updated Dec. 5, 2014, 7 p.m.)


Review request for kafka.


Bugs: KAFKA-1476
https://issues.apache.org/jira/browse/KAFKA-1476


Repository: kafka


Description (updated)
---

KAFKA-1476 Implemented review comments from Neha and Ashish


Diffs (updated)
-

  core/src/main/scala/kafka/tools/ConsumerCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 

Diff: https://reviews.apache.org/r/27693/diff/


Testing
---


Thanks,

Balaji Seshadri



Jenkins build is back to normal : Kafka-trunk #350

2014-12-05 Thread Apache Jenkins Server
See 



[jira] [Comment Edited] (KAFKA-1808) SimpleConsumer not working as expected

2014-12-05 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235741#comment-14235741
 ] 

Neha Narkhede edited comment on KAFKA-1808 at 12/5/14 6:15 PM:
---

Please look at the "Finding Starting Offset for Reads" section of the example 
to learn how to retrieve offsets from the previous run of the consumer. You may 
need to closely troubleshoot how your consumer maintains its offsets and how it 
retrieves them on restart. Also, these sorts of questions are best suited for 
the mailing list; JIRAs are filed once it is determined that we may be looking 
at a bug.
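
For reference, the offset-fetch helper from that example looks roughly like the 
following (adapted from the 0.8 SimpleConsumer wiki page; treat it as a sketch 
rather than drop-in code):

{code}
import java.util.HashMap;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetFetcher {
    // Fetch the next offset for a partition; pass EarliestTime() or
    // LatestTime() as 'whichTime' depending on where reading should start.
    public static long getLastOffset(SimpleConsumer consumer, String topic,
                                     int partition, long whichTime,
                                     String clientName) {
        TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
                new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(topicAndPartition,
                new PartitionOffsetRequestInfo(whichTime, 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
        OffsetResponse response = consumer.getOffsetsBefore(request);
        if (response.hasError()) {
            System.out.println("Error fetching offset data: "
                    + response.errorCode(topic, partition));
            return 0;
        }
        long[] offsets = response.offsets(topic, partition);
        return offsets[0];
    }
}
{code}

A consumer that wants to resume where it left off has to persist the offset it 
reached and pass that stored value on restart, rather than asking for 
LatestTime() each time.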


was (Author: nehanarkhede):
Please look at the "Finding Starting Offset for Reads" section of the example 
to know how to retrieve offsets from the previous run of the consumer. You may 
need to close troubleshoot how your consumer maintains it's offsets and how it 
retrieves it on restart. You can also look at the Also, these sort of questions 
are best suited for the mailing list. JIRAs are filed once it is determined 
that we may be looking at a bug.

> SimpleConsumer not working as expected
> --
>
> Key: KAFKA-1808
> URL: https://issues.apache.org/jira/browse/KAFKA-1808
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
> Environment: linux_x64
>Reporter: ajay
>Assignee: Neha Narkhede
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I followed the following link:
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> The consumers are consuming messages from a specific partition. The issue 
> is that when my consumer is running and I push messages to a specific partition 
> of a topic using a producer, it consumes messages from that partition. 
> But if my consumer is not running at present and I push some messages to the 
> topic and then start the consumer again, it does not consume the messages that 
> were pushed by the producer, though it is again ready to consume messages that 
> will be pushed now. 
> I am using LatestTime() instead of EarliestTime() as I want to consume only 
> unprocessed messages.
> For example:
> Case 1:
> Consumer is running.
> Producer pushed messages M1, M2, M3 to partition 1 of topic 1.
> Result: consumer will consume all three messages.
> Case 2:
> Consumer is not running.
> Producer now pushes messages m4, m5, m6 to partition 1 of topic 1.
> Consumer is invoked now.
> Result: consumer does not consume messages m4, m5, m6.
> But if I check the offset then it is set to 7. This means the producer has 
> advanced the offset to 7 while producing messages; as a result the consumer 
> will consume messages from offset 7 now, i.e. the consumer will not consume 
> messages m4, m5, m6.
> Please help; ideally, when the consumer comes up again it should read messages 
> from m4. SimpleConsumer should start reading messages from the point 
> where it left off last time once it comes up again.
> I might have filed this in the wrong category as a bug, but I need help to just 
> consume unprocessed messages only. Please help with 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-725) Broker Exception: Attempt to read with a maximum offset less than start offset

2014-12-05 Thread lokesh Birla (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235782#comment-14235782
 ] 

lokesh Birla commented on KAFKA-725:


Neha,

I still see this issue in 0.8.1.1.

https://issues.apache.org/jira/browse/KAFKA-1806

> Broker Exception: Attempt to read with a maximum offset less than start offset
> --
>
> Key: KAFKA-725
> URL: https://issues.apache.org/jira/browse/KAFKA-725
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>Assignee: Jay Kreps
>
> I have a simple consumer that's reading from a single topic/partition pair. 
> Running it seems to trigger these messages on the broker periodically:
> 2013/01/22 23:04:54.936 ERROR [KafkaApis] [kafka-request-handler-4] [kafka] [] [KafkaApi-466] error when processing request (MyTopic,4,7951732,2097152)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset (7951715) less than the start offset (7951732).
>   at kafka.log.LogSegment.read(LogSegment.scala:105)
>   at kafka.log.Log.read(Log.scala:390)
>   at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:372)
>   at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:330)
>   at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:326)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>   at scala.collection.immutable.Map$Map1.map(Map.scala:93)
>   at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:326)
>   at kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:165)
>   at kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:164)
>   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>   at kafka.server.KafkaApis.maybeUnblockDelayedFetchRequests(KafkaApis.scala:164)
>   at kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:186)
>   at kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:185)
>   at scala.collection.immutable.Map$Map2.foreach(Map.scala:127)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:185)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
>   at java.lang.Thread.run(Thread.java:619)
> When I shut the consumer down, I don't see the exceptions anymore.
> This is the code that my consumer is running:
>   while (true) {
>     // we believe the consumer to be connected, so try and use it for a fetch request
>     val request = new FetchRequestBuilder()
>       .addFetch(topic, partition, nextOffset, fetchSize)
>       .maxWait(Int.MaxValue)
>       // TODO for super high-throughput, might be worth waiting for more bytes
>       .minBytes(1)
>       .build
>     debug("Fetching messages for stream %s and offset %s." format (streamPartition, nextOffset))
>     val messages = connectedConsumer.fetch(request)
>     debug("Fetch complete for stream %s and offset %s. Got messages: %s" format (streamPartition, nextOffset, messages))
>     if (messages.hasError) {
>       warn("Got error code from broker for %s: %s. Shutting down consumer to trigger a reconnect." format (streamPartition, messages.errorCode(topic, partition)))
>       ErrorMapping.maybeThrowException(messages.errorCode(topic, partition))
>     }
>     messages.messageSet(topic, partition).foreach(msg => {
>       watchers.foreach(_.onMessagesReady(msg.offset.toString, msg.message.payload))
>       nextOffset = msg.nextOffset
>     })
>   }
> Any idea what might be causing this error?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1808) SimpleConsumer not working as expected

2014-12-05 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235741#comment-14235741
 ] 

Neha Narkhede commented on KAFKA-1808:
--

Please look at the "Finding Starting Offset for Reads" section of the example 
to learn how to retrieve offsets from a previous run of the consumer. You may 
need to closely troubleshoot how your consumer maintains its offsets and how 
it retrieves them on restart. Also, these sorts of questions are best suited 
for the mailing list; JIRAs are filed once it is determined that we may be 
looking at a bug.

> SimpleConsumer not working as expected
> --
>
> Key: KAFKA-1808
> URL: https://issues.apache.org/jira/browse/KAFKA-1808
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
> Environment: linux_x64
>Reporter: ajay
>Assignee: Neha Narkhede
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I followed the following link:
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> The consumer reads messages from a specific partition. When my consumer is 
> running and I push messages to a specific partition of a topic using a 
> producer, it consumes messages from that partition.
> But if my consumer is not running when I push some messages to the topic, and 
> I then start the consumer, it does not consume the messages that were pushed 
> earlier; it is only ready to consume messages that are pushed from now on.
> I am using LatestTime() instead of EarliestTime() as I want to consume only 
> unprocessed messages.
> For example:
> Case 1
> Consumer is running:
> Producer pushes messages M1, M2, M3 to partition 1 of topic 1.
> Result: the consumer consumes all three messages.
> Case 2
> Consumer is not running.
> Producer now pushes messages m4, m5, m6 to partition 1 of topic 1.
> Consumer is invoked now.
> Result: the consumer does not consume messages m4, m5, m6.
> But if I check the offset, it is set to 7. This means the producer has 
> advanced the offset to 7 while producing messages; as a result the consumer 
> will consume messages from offset 7 onward, i.e. it will not consume messages 
> m4, m5, m6.
> Ideally, when the consumer comes up again it should read messages from m4. 
> SimpleConsumer should start reading from the point where it left off the last 
> time, once it comes up again.
> I might have filed this in the wrong category as a bug, but I need help to 
> consume unprocessed messages only. Please help with 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #349

2014-12-05 Thread Apache Jenkins Server
See 

Changes:

[joe.stein] KAFKA-1173 Using Vagrant to get up and running with Apache Kafka 
patch by Ewen Cheslack-Postava reviewed by Joe Stein

--
[...truncated 1101 lines...]
kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > t

[jira] [Updated] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-12-05 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1173:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to trunk

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch, 
> KAFKA-1173_2014-11-18_16:01:33.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities. I have 
> found it very useful for development and testing, and I am now working with a 
> few clients using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running:
> For 0.8.0 I have a patch on github: https://github.com/stealthly/kafka
> 1) Install Vagrant: http://www.vagrantup.com/
> 2) Install VirtualBox: https://www.virtualbox.org/
> In the main kafka folder:
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> Once this is done:
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt. 
> If you want, you can log in to the machines using vagrant ssh, but 
> you don't need to.
> You can access the brokers and zookeeper by their IPs, e.g.:
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1808) SimpleConsumer not working as expected

2014-12-05 Thread ajay (JIRA)
ajay created KAFKA-1808:
---

 Summary: SimpleConsumer not working as expected
 Key: KAFKA-1808
 URL: https://issues.apache.org/jira/browse/KAFKA-1808
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.1.1
 Environment: linux_x64
Reporter: ajay
Assignee: Neha Narkhede


I followed the following link:

https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example

The consumer reads messages from a specific partition. When my consumer is 
running and I push messages to a specific partition of a topic using a 
producer, it consumes messages from that partition.

But if my consumer is not running when I push some messages to the topic, and 
I then start the consumer, it does not consume the messages that were pushed 
earlier; it is only ready to consume messages that are pushed from now on.

I am using LatestTime() instead of EarliestTime() as I want to consume only 
unprocessed messages.

For example:

Case 1

Consumer is running:

Producer pushes messages M1, M2, M3 to partition 1 of topic 1.

Result: the consumer consumes all three messages.

Case 2

Consumer is not running.

Producer now pushes messages m4, m5, m6 to partition 1 of topic 1.

Consumer is invoked now.

Result: the consumer does not consume messages m4, m5, m6.

But if I check the offset, it is set to 7. This means the producer has 
advanced the offset to 7 while producing messages; as a result the consumer 
will consume messages from offset 7 onward, i.e. it will not consume messages 
m4, m5, m6.

Ideally, when the consumer comes up again it should read messages from m4. 
SimpleConsumer should start reading from the point where it left off the last 
time, once it comes up again.

I might have filed this in the wrong category as a bug, but I need help to 
consume unprocessed messages only. Please help with 
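For what it's worth, the behaviour described is what LatestTime() is defined
to do: it means "start at the current tail", not "resume where I left off".
The usual fix is to checkpoint the last processed offset yourself and seed the
next fetch from it on restart. A minimal Scala sketch of such a checkpoint,
where the file path and helper names are illustrative only, not part of the
wiki example:

  import java.nio.file.{Files, Paths, StandardOpenOption}

  // Illustrative checkpoint helpers: persist the next offset to read after
  // each processed batch; on restart, fall back to a broker-supplied offset
  // when no checkpoint exists yet. The path is a made-up example.
  val checkpointPath = Paths.get("/tmp/consumer-topic1-p1.offset")

  def saveOffset(offset: Long): Unit =
    Files.write(checkpointPath, offset.toString.getBytes("UTF-8"),
      StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)

  def loadOffset(fallback: => Long): Long =
    if (Files.exists(checkpointPath))
      new String(Files.readAllBytes(checkpointPath), "UTF-8").trim.toLong
    else fallback

On restart the consumer would start fetching from loadOffset(...), seeded with
the earliest available offset as the fallback, so m4, m5 and m6 would be read
instead of skipped.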



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)