Build failed in Jenkins: kafka_system_tests #126

2015-10-31 Thread ewen
See 

--
Started by timer
Building on master in workspace 

[kafka_system_tests] $ /bin/bash /tmp/hudson6104015082044430119.sh
Running command: git pull && ./gradlew clean jar

Running command: which virtualenv

Running command: . 

 pip uninstall ducktape -y

Running command: . 

 pip uninstall ducktape -y

Running command: . 

 pip install ducktape==2.5.0

Running command: vagrant destroy -f

Build step 'Execute shell' marked build as failure


[jira] [Created] (KAFKA-2717) Add kafka logbak appender

2015-10-31 Thread Xin Wang (JIRA)
Xin Wang created KAFKA-2717:
---

 Summary: Add kafka logbak appender
 Key: KAFKA-2717
 URL: https://issues.apache.org/jira/browse/KAFKA-2717
 Project: Kafka
  Issue Type: New Feature
Reporter: Xin Wang


Many applications use logback as their logging framework, so a LogbakAppender 
would make integrating with Kafka easier, just like the existing Log4jAppender.
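For illustration, a logback.xml fragment wiring in such an appender might look as follows. The appender class and the brokerList/topic property names are hypothetical, mirroring KafkaLog4jAppender (this is a proposal, not a released API); only the `<configuration>`, `<appender>`, and `<root>` elements are standard logback configuration.

```xml
<!-- Hypothetical sketch: the class name and brokerList/topic properties are
     assumptions modeled on KafkaLog4jAppender, not a real released API. -->
<configuration>
  <appender name="KAFKA" class="org.apache.kafka.logbackappender.KafkaLogbackAppender">
    <brokerList>localhost:9092</brokerList>
    <topic>app-logs</topic>
  </appender>
  <root level="INFO">
    <appender-ref ref="KAFKA"/>
  </root>
</configuration>
```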



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2717) Add kafka logbak appender

2015-10-31 Thread Xin Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Wang updated KAFKA-2717:

Description: 
Many applications use logback as their logging framework, so a 
KafkaLogbakAppender would make integrating with Kafka easier, just like the 
existing KafkaLog4jAppender.

  was:
Many applications use logback as their logging framework, so a LogbakAppender 
would make integrating with Kafka easier, just like the existing Log4jAppender.


> Add kafka logbak appender
> -
>
> Key: KAFKA-2717
> URL: https://issues.apache.org/jira/browse/KAFKA-2717
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Xin Wang
>
> Many applications use logback as their logging framework, so a
> KafkaLogbakAppender would make integrating with Kafka easier, just like the
> existing KafkaLog4jAppender.





[GitHub] kafka pull request: KAFKA-2717: Add kafka logbak appender

2015-10-31 Thread vesense
GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/398

KAFKA-2717: Add kafka logbak appender

https://issues.apache.org/jira/browse/KAFKA-2717

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka kafka-logback

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/398.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #398


commit 8403829dd194ad99f507acc94a01274a6d2b5f39
Author: vesense 
Date:   2015-10-30T08:34:21Z

ignore subproject .gitignore file

commit eb7d7d512f2087f2914c6e538b242c2c6d98d5d3
Author: vesense 
Date:   2015-10-30T13:14:06Z

add logback appender

commit 9de9c4ed45c2b3fc4263c2b0d1dd14b2f40fbb12
Author: vesense 
Date:   2015-10-31T10:44:59Z

Add kafka logbak appender




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2717) Add kafka logbak appender

2015-10-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983932#comment-14983932
 ] 

ASF GitHub Bot commented on KAFKA-2717:
---

GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/398

KAFKA-2717: Add kafka logbak appender

https://issues.apache.org/jira/browse/KAFKA-2717

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka kafka-logback

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/398.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #398


commit 8403829dd194ad99f507acc94a01274a6d2b5f39
Author: vesense 
Date:   2015-10-30T08:34:21Z

ignore subproject .gitignore file

commit eb7d7d512f2087f2914c6e538b242c2c6d98d5d3
Author: vesense 
Date:   2015-10-30T13:14:06Z

add logback appender

commit 9de9c4ed45c2b3fc4263c2b0d1dd14b2f40fbb12
Author: vesense 
Date:   2015-10-31T10:44:59Z

Add kafka logbak appender




> Add kafka logbak appender
> -
>
> Key: KAFKA-2717
> URL: https://issues.apache.org/jira/browse/KAFKA-2717
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Xin Wang
>
> Many applications use logback as their logging framework, so a
> KafkaLogbakAppender would make integrating with Kafka easier, just like the
> existing KafkaLog4jAppender.





[jira] [Commented] (KAFKA-1850) Failed reassignment leads to additional replica

2015-10-31 Thread Alex Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983979#comment-14983979
 ] 

Alex Tian commented on KAFKA-1850:
--

Sorry for my late reply. This is how I worked around similar problems in 
release 0.8.2.1:

I simply wait until all replicas for each partition appear in its ISR list. 
Although the tool still reports that most of the partition reassignments 
failed, I re-execute the reassignment, which then reports that all 
reassignments succeeded, and the old replicas are finally deleted.

As for my earlier problems on release 0.8.1, I suspect the cause is that I 
selected a wrong target machine, whose free disk space may have been too 
limited for the reassignment scheduled by Kafka.

Thank you very much for your help, and again, my apologies for the late reply.
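The extra-replica symptom is consistent with the controller invariant described in the ticket: while a reassignment is in flight, the assigned replica set (AR) is the union of the old replicas (OAR) and the reassigned replicas (RAR), and AR only shrinks to RAR once the reassignment completes. A toy sketch of that invariant (a hypothetical helper, not Kafka's actual controller code):

```python
def assigned_replicas(oar, rar, reassignment_complete):
    """Sketch of the AR invariant during partition reassignment.

    oar: old assigned replicas, rar: reassigned replicas.
    While the reassignment is in flight, AR = RAR union OAR, which is why a
    failed (stuck) reassignment leaves a partition with extra replicas; only
    a completed reassignment shrinks AR to RAR.
    """
    if reassignment_complete:
        return list(rar)
    # In-flight (or failed mid-flight): reassigned replicas plus any
    # remaining old replicas, preserving order.
    return list(rar) + [r for r in oar if r not in set(rar)]
```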

> Failed reassignment leads to additional replica
> ---
>
> Key: KAFKA-1850
> URL: https://issues.apache.org/jira/browse/KAFKA-1850
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
> Environment: CentOS  (Linux Kernel 2.6.32-71.el6.x86_64 )
>Reporter: Alex Tian
>Assignee: Neha Narkhede
>Priority: Minor
>  Labels: newbie
> Attachments: Track on testingTopic-9's movement.txt, 
> track_on_testingTopic-9_movement_on_the_following_2_days.txt
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> When I started a topic reassignment (36 partitions in total) in my Kafka 
> cluster, 24 partitions succeeded and 12 failed. However, the 12 failed 
> partitions ended up with extra replicas. I think the reason is that AR still 
> consists of RAR and OAR even though the reassignment for the partition 
> failed. Could we regard this problem as a bug? My apologies for any mistakes 
> in my question, since I am a beginner with Kafka.
> This is the output from operation: 
> 1. alex-topics-to-move.json:
> {"topics": [{"topic": "testingTopic"}],
>  "version":1
> }
> 2. Generate a reassignment plan
> $./kafka-reassign-partitions.sh  --generate  --broker-list 0,1,2,3,4 
> --topics-to-move-json-file ./alex-topics-to-move.json   --zookeeper 
> 192.168.112.95:2181,192.168.112.96:2181,192.168.112.97:2181,192.168.112.98:2181,192.168.112.99:2181
> Current partition replica assignment
> {"version":1,
>  "partitions":[   {"topic":"testingTopic","partition":27,"replicas":[0,2]},
>
> {"topic":"testingTopic","partition":1,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":12,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":6,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":16,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":32,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":18,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":31,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":9,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":23,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":19,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":34,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":17,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":7,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":20,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":8,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":11,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":3,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":30,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":35,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":26,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":22,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":10,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":24,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":21,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":15,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":4,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":28,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":25,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":14,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":2,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":13,"replicas":[1,2]},
>   {"topic

[jira] [Updated] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-10-31 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-2255:

Affects Version/s: 0.8.2.0

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> The Samza team noticed that the documentation for the 
> max.in.flight.requests.per.connection property of the Java-based producer is 
> missing from the 0.8.2 documentation. I checked the code, and it looks like 
> this config is still enforced. Can you please update the website to reflect 
> this?
> Thanks!
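For reference, the property in question is set in the producer configuration; a minimal fragment might look like the sketch below (the property name is from the ticket, the value is illustrative):

```properties
# Producer config fragment. Setting this to 1 keeps at most one
# unacknowledged request per connection, which avoids message reordering
# when retries occur, at the cost of request pipelining.
max.in.flight.requests.per.connection=1
```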





[jira] [Commented] (KAFKA-2680) Zookeeper SASL check prevents any SASL code being run with IBM JDK

2015-10-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984033#comment-14984033
 ] 

ASF GitHub Bot commented on KAFKA-2680:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/357


> Zookeeper SASL check prevents any SASL code being run with IBM JDK
> --
>
> Key: KAFKA-2680
> URL: https://issues.apache.org/jira/browse/KAFKA-2680
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Vendor-specific code in JaasUtils prevents Kafka running with IBM JDK if a 
> Jaas configuration file is provided.
> {quote}
> java.security.NoSuchAlgorithmException: JavaLoginConfig Configuration 
> not available
> at sun.security.jca.GetInstance.getInstance(GetInstance.java:210)
> at 
> javax.security.auth.login.Configuration.getInstance(Configuration.java:341)
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:100)
> {quote}
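For context, a JAAS configuration file of the kind that triggers this code path might look like the sketch below. The "Client" section name follows the ZooKeeper client convention; the keytab path and principal are examples. Note that the login module class itself is vendor-specific (com.sun.security.auth.module.Krb5LoginModule on Oracle/OpenJDK, while the IBM JDK commonly ships its own equivalent), which is part of why vendor assumptions in this area are fragile.

```
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};
```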





[GitHub] kafka pull request: KAFKA-2680: Use IBM ConfigFile class to load j...

2015-10-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/357




[jira] [Updated] (KAFKA-2680) Zookeeper SASL check prevents any SASL code being run with IBM JDK

2015-10-31 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2680:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 357
[https://github.com/apache/kafka/pull/357]

> Zookeeper SASL check prevents any SASL code being run with IBM JDK
> --
>
> Key: KAFKA-2680
> URL: https://issues.apache.org/jira/browse/KAFKA-2680
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Vendor-specific code in JaasUtils prevents Kafka running with IBM JDK if a 
> Jaas configuration file is provided.
> {quote}
> java.security.NoSuchAlgorithmException: JavaLoginConfig Configuration 
> not available
> at sun.security.jca.GetInstance.getInstance(GetInstance.java:210)
> at 
> javax.security.auth.login.Configuration.getInstance(Configuration.java:341)
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:100)
> {quote}





[jira] [Resolved] (KAFKA-1695) Authenticate connection to Zookeeper

2015-10-31 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-1695.

Resolution: Fixed

Resolving this jira since all sub-jiras are done.

> Authenticate connection to Zookeeper
> 
>
> Key: KAFKA-1695
> URL: https://issues.apache.org/jira/browse/KAFKA-1695
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.0.0
>
>
> We need to make it possible to secure the Zookeeper cluster Kafka is using. 
> This would make use of the normal authentication ZooKeeper provides. 
> ZooKeeper supports a variety of authentication mechanisms so we will need to 
> figure out what has to be passed in to the zookeeper client.
> The intention is that, once the current round of client work is done, it 
> should be possible to run without clients needing access to ZooKeeper. All we 
> need here is to ensure that only the Kafka cluster is able to read and write 
> the Kafka znodes (we shouldn't need to set any kind of ACL on a per-znode 
> basis).





Build failed in Jenkins: kafka_system_tests #127

2015-10-31 Thread ewen
See 

--
Started by user Geoff
Building in workspace 
[kafka_system_tests] $ /bin/bash /tmp/hudson4612275777450431607.sh
Running command: git pull && ./gradlew clean jar

Running command: which virtualenv

Running command: . 

 pip uninstall ducktape -y

Running command: . 

 pip install ducktape==2.5.0

Running command: vagrant destroy -f

Build step 'Execute shell' marked build as failure


Re: [VOTE] KIP-38: ZooKeeper authentication

2015-10-31 Thread Flavio Junqueira
With 4 binding votes and 9 non-binding votes, the KIP-38 proposal passes. 

Thanks everyone for voting and comments.

-Flavio

> On 22 Oct 2015, at 23:34, Jun Rao  wrote:
> 
> +1
> 
> Thanks,
> 
> Jun
> 
> On Wed, Oct 21, 2015 at 8:17 AM, Flavio Junqueira  wrote:
> 
>> Thanks everyone for the feedback so far. At this point, I'd like to start
>> a vote for KIP-38.
>> 
>> Summary: Add support for ZooKeeper authentication
>> KIP page:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-38%3A+ZooKeeper+Authentication
>> <
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-38:+ZooKeeper+Authentication
>>> 
>> 
>> Thanks,
>> -Flavio



Build failed in Jenkins: kafka-trunk-jdk8 #84

2015-10-31 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2680; Use IBM ConfigFile class to load jaas config if IBM JDK

--
[...truncated 4067 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets 

Build failed in Jenkins: kafka-trunk-jdk7 #743

2015-10-31 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2680; Use IBM ConfigFile class to load jaas config if IBM JDK

--
[...truncated 3019 lines...]

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testWrongSerializer PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.SaslPlaintextConsumerTest > testPauseStateNotPreservedByRebalance 
PASSED

kafka.api.SaslPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.SslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SslConsumerTest > testListTopics PASSED

kafka.api.SslC

Re: [VOTE] KIP-38: ZooKeeper authentication

2015-10-31 Thread Flavio Junqueira
For the record, here are the +1 votes I counted (no -1 vote):

Binding: Jay Kreps, Joel Koshy, Neha Narkhede, Jun Rao
Non-binding: Onur Karaman, Grant Henke, Dong Lin, Jiangjie Qin, Todd Palino, 
Edward Ribeiro, Ashish Singh, Ismael Juma, Aditya Auradkar

-Flavio

> On 31 Oct 2015, at 17:06, Flavio Junqueira  wrote:
> 
> With 4 binding votes and 9 non-binding votes, the KIP-38 proposal passes. 
> 
> Thanks everyone for voting and comments.
> 
> -Flavio
> 
>> On 22 Oct 2015, at 23:34, Jun Rao  wrote:
>> 
>> +1
>> 
>> Thanks,
>> 
>> Jun
>> 
>> On Wed, Oct 21, 2015 at 8:17 AM, Flavio Junqueira  wrote:
>> 
>>> Thanks everyone for the feedback so far. At this point, I'd like to start
>>> a vote for KIP-38.
>>> 
>>> Summary: Add support for ZooKeeper authentication
>>> KIP page:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-38%3A+ZooKeeper+Authentication
>>> <
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-38:+ZooKeeper+Authentication
 
>>> 
>>> Thanks,
>>> -Flavio
> 



[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-31 Thread Vinoth Chandar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984243#comment-14984243
 ] 

Vinoth Chandar commented on KAFKA-2580:
---

Based on this, it looks like we can close this? 

>> So a lot of this comes down to the implementation. A naive 10k item LRU 
>> cache could easily be far more memory hungry than having 50k open FDs, plus 
>> being in heap this would add a huge number of objects to manage.

[~jkreps] I am a little confused. What I meant by an LRU cache was simply 
limiting the number of "java.io.File" objects (or their equivalent in the 
Kafka codebase) that represent the handle to a segment. So, if there are 10K 
such objects in a (properly sized) ConcurrentHashMap, how would that add so 
much memory overhead compared to holding 50K-200K objects anyway?
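The bounded-handle-cache idea under discussion can be sketched as follows. This is a minimal illustrative LRU in Python, not Kafka's implementation; the point being debated above is exactly this eviction-closes-the-handle behavior.

```python
from collections import OrderedDict

class FileHandleCache:
    """Illustrative bounded LRU cache of open file handles.

    Keeps at most max_open handles open; accessing a cached path marks it
    most recently used, and opening a new path past the limit closes the
    least recently used handle. Sketch only, not Kafka's actual code.
    """

    def __init__(self, max_open):
        self.max_open = max_open
        self._handles = OrderedDict()  # path -> file object, in LRU order

    def get(self, path):
        handle = self._handles.get(path)
        if handle is not None:
            self._handles.move_to_end(path)  # mark as most recently used
            return handle
        if len(self._handles) >= self.max_open:
            # Evict and close the least recently used handle.
            _, evicted = self._handles.popitem(last=False)
            evicted.close()
        handle = open(path, "rb")
        self._handles[path] = handle
        return handle

    def close_all(self):
        for handle in self._handles.values():
            handle.close()
        self._handles.clear()
```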

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non active (not written to or read from) files. (in fact, there are some 
> threads going back to 2013 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up the ulimit (it is already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I could not find anything similar. Very interested 
> to know if there are plans to address this (given that Samza's changelog 
> topic is meant to be a persistent large-state use case).  





[jira] [Commented] (KAFKA-2528) Quota Performance Evaluation

2015-10-31 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984276#comment-14984276
 ] 

Dong Lin commented on KAFKA-2528:
-

[~jkreps] Sorry for the late reply. I just tested the quota using the latest 
trunk. Please find the results below. 

Configuration: the test is run with one broker, one producer-performance 
instance configured with topic=test record-size=1 --throughput=10, and one 
console consumer which reads from topic "test" at the maximum possible 
throughput. The consumer always runs after the producer stops. Bytes-in and 
bytes-out rates are collected using a one-minute average after the values 
stabilize.

1) Unlimited quota. Broker’s bytes-in and bytes-out rates are 85 MBps and 250 
MBps.
2) 1 MBps quota for both producer and consumer. Broker’s bytes-in and bytes-out 
rates are 0.95 MBps and 0.98 MBps.
3) 10 MBps quota for both producer and consumer. Broker’s bytes-in and 
bytes-out rates are 9.8 MBps and 9.9 MBps.
4) 50 MBps quota for both producer and consumer. Broker’s bytes-in and 
bytes-out rates are 49 MBps and 49 MBps.

It appears that the quota on the latest trunk is working correctly now. I 
didn't try to reproduce the problem in the original report, where the broker 
showed a 2 MBps bytes-in rate in inGraph even when configured with a 1 MBps 
produce quota. The difference in results may be due to the change made to 
Rate.java in https://github.com/apache/kafka/pull/323.
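As a rough mental model of how a byte-rate quota caps the observed rates above, here is a minimal token-bucket sketch. It is illustrative only: Kafka's broker computes windowed rate metrics and returns a throttle delay rather than using this exact scheme, and all names here are hypothetical.

```python
class ByteRateQuota:
    """Minimal token-bucket sketch of a byte-rate quota (illustrative only;
    not Kafka's actual windowed-rate implementation)."""

    def __init__(self, bytes_per_sec):
        self.bytes_per_sec = bytes_per_sec
        self.tokens = bytes_per_sec  # allow roughly one second of burst
        self.last = 0.0              # time of the previous call, in seconds

    def throttle_time(self, now, nbytes):
        """Record nbytes observed at time `now`; return how long the caller
        should be delayed (seconds) to stay under the quota."""
        # Refill tokens for the elapsed time, capped at one second's worth.
        self.tokens = min(self.bytes_per_sec,
                          self.tokens + (now - self.last) * self.bytes_per_sec)
        self.last = now
        self.tokens -= nbytes
        if self.tokens >= 0:
            return 0.0
        # Overdrawn: delay until the deficit would be repaid at the quota rate.
        return -self.tokens / self.bytes_per_sec
```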




> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
> Attachments: QuotaPerformanceEvaluation.pdf
>
>
> In this document we present the results of experiments we did at LinkedIn, to 
> validate the basic functionality of quota, as well as the performances 
> benefits of using quota in a heterogeneous multi-tenant environment.





[jira] [Updated] (KAFKA-2528) Quota Performance Evaluation

2015-10-31 Thread Dong Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated KAFKA-2528:

Attachment: (was: QuotaPerformanceEvaluation.pdf)

> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> In this document we present the results of experiments we did at LinkedIn, to 
> validate the basic functionality of quota, as well as the performances 
> benefits of using quota in a heterogeneous multi-tenant environment.





[jira] [Updated] (KAFKA-2528) Quota Performance Evaluation

2015-10-31 Thread Dong Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated KAFKA-2528:

Attachment: QuotaPerformanceEvaluationRelease.pdf

> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
> Attachments: QuotaPerformanceEvaluationRelease.pdf
>
>
> In this document we present the results of experiments we did at LinkedIn, to 
> validate the basic functionality of quota, as well as the performances 
> benefits of using quota in a heterogeneous multi-tenant environment.





[jira] [Commented] (KAFKA-2528) Quota Performance Evaluation

2015-10-31 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984281#comment-14984281
 ] 

Dong Lin commented on KAFKA-2528:
-

Updated the PDF with the latest evaluation results.

> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
> Attachments: QuotaPerformanceEvaluationRelease.pdf
>
>
> In this document we present the results of experiments we did at LinkedIn, to 
> validate the basic functionality of quota, as well as the performances 
> benefits of using quota in a heterogeneous multi-tenant environment.


