[jira] [Work started] (KAFKA-2361) Unit Test BUILD FAILED

2015-11-18 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2361 started by jin xing.
---
> Unit Test BUILD FAILED
> --
>
> Key: KAFKA-2361
> URL: https://issues.apache.org/jira/browse/KAFKA-2361
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.8.2.1
> Environment: Linux
>Reporter: Bo Wang
>Assignee: jin xing
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> 290 tests completed, 2 failed :
> kafka.api.ProducerFailureHandlingTest > 
> testNotEnoughReplicasAfterBrokerShutdown FAILED
> org.scalatest.junit.JUnitTestFailedError: Expected 
> NotEnoughReplicasException when producing to topic with fewer brokers than 
> min.insync.replicas
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
> at org.scalatest.Assertions$class.fail(Assertions.scala:711)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
> at 
> kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:355)
> kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED
> org.scalatest.junit.JUnitTestFailedError: Expected KafkaServer setup to 
> fail with connection exception but caught a different exception.
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
> at org.scalatest.Assertions$class.fail(Assertions.scala:711)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
> at 
> kafka.server.ServerShutdownTest.testCleanShutdownAfterFailedStartup(ServerShutdownTest.scala:136)
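The first failure above exercises the broker-side min.insync.replicas check: with acks=-1 (all), an append should be rejected with NotEnoughReplicasException once the in-sync replica set shrinks below min.insync.replicas. A minimal sketch of that check, with simplified, hypothetical names (the real logic lives in the broker's partition/log code):

```python
# Hypothetical, simplified sketch of the broker-side ISR check the failing
# test exercises; names are illustrative, not Kafka's actual API.
class NotEnoughReplicasError(Exception):
    pass

def check_append(isr_size, min_insync_replicas, acks):
    """Reject an acks=all produce when the ISR is below min.insync.replicas."""
    if acks == -1 and isr_size < min_insync_replicas:
        raise NotEnoughReplicasError(
            "ISR size %d < min.insync.replicas %d"
            % (isr_size, min_insync_replicas))
    return "appended"

# With 3 in-sync replicas and min.insync.replicas=2 the append succeeds;
# after broker shutdowns leave only 1 in-sync replica, it must fail.
print(check_append(isr_size=3, min_insync_replicas=2, acks=-1))  # appended
try:
    check_append(isr_size=1, min_insync_replicas=2, acks=-1)
except NotEnoughReplicasError as e:
    print("rejected:", e)
```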



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2361) Unit Test BUILD FAILED

2015-11-18 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing resolved KAFKA-2361.
-
Resolution: Fixed






[jira] [Assigned] (KAFKA-2361) Unit Test BUILD FAILED

2015-11-18 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing reassigned KAFKA-2361:
---

Assignee: jin xing






[jira] [Commented] (KAFKA-2361) Unit Test BUILD FAILED

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012730#comment-15012730
 ] 

ASF GitHub Bot commented on KAFKA-2361:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/558

KAFKA-2361: unit test failure in ProducerFailureHandlingTest.testNotE…

This is the same issue reported at https://issues.apache.org/jira/browse/KAFKA-1999, 
but it is still not fixed on branch 0.8.2. @junrao 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka 0.8.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/558.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #558


commit da1d0d88cd4fd8819b2bbc1068a4473062a041e9
Author: jinxing 
Date:   2015-11-19T03:42:05Z

KAFKA-2361: unit test failure in 
ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown









[GitHub] kafka pull request: KAFKA-2361: unit test failure in ProducerFailu...

2015-11-18 Thread ZoneMayor
GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/558

KAFKA-2361: unit test failure in ProducerFailureHandlingTest.testNotE…

This is the same issue reported at https://issues.apache.org/jira/browse/KAFKA-1999, 
but it is still not fixed on branch 0.8.2. @junrao 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #168

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[confluent] KAFKA-2845: new client old broker compatibility

[confluent] KAFKA-2860: better handling of auto commit errors

--
[...truncated 4727 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.Lo

[GitHub] kafka pull request: MINOR: Use /usr/bin/env to find bash

2015-11-18 Thread dwimsey
GitHub user dwimsey opened a pull request:

https://github.com/apache/kafka/pull/557

MINOR: Use /usr/bin/env to find bash

This is a minor change to the shell scripts in the bin directory that use 
bash: it switches them to locate bash via /usr/bin/env, which should provide 
better cross-platform compatibility. Specifically, it allows these scripts to 
work out of the box on the various BSDs, where bash is installed in the default 
manner in /usr/local/bin instead of /bin, and it should remain compatible with 
other OSes that typically put most things in /bin.
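The change amounts to swapping the interpreter line at the top of each script; a sketch of the two forms (the echo line just demonstrates that env(1) resolves bash through PATH):

```shell
# Hard-coded interpreter path; fails where bash is not /bin/bash
# (e.g. the BSDs, which install it under /usr/local/bin):
#   #!/bin/bash
#
# Portable form: let env(1) resolve bash via PATH at run time:
#   #!/usr/bin/env bash

# Demonstrate the PATH-based lookup that the portable shebang relies on:
env bash -c 'echo "bash resolved via PATH"'
```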

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dwimsey/kafka feature/env_bash

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/557.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #557


commit ce972f6d30a1e8156b5faa60c95204baf25aba56
Author: David Wimsey 
Date:   2015-11-19T02:26:21Z

Switch to using /usr/bin/env bash instead of /bin/bash to find and execute 
bash in shell scripts to improve cross platform compatibility.






Jenkins build is back to normal : kafka-trunk-jdk7 #836

2015-11-18 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2820) System tests: log level is no longer propagating from service classes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012616#comment-15012616
 ] 

ASF GitHub Bot commented on KAFKA-2820:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/556

KAFKA-2820: Remove log threshold on appender in tools-log4j.properties

Removed a config in tools-log4j.properties which prevented certain service 
classes from running at TRACE level.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2820-systest-tool-loglevel

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/556.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #556


commit d58be9e6ed248882ef669e5d402dbb427a153eb7
Author: Geoff Anderson 
Date:   2015-11-19T01:53:52Z

Remove log threshold on appender in tools-log4j.properties




> System tests: log level is no longer propagating from service classes
> -
>
> Key: KAFKA-2820
> URL: https://issues.apache.org/jira/browse/KAFKA-2820
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.1.0
>
>
> Many system test service classes specify a log level which should be 
> reflected in the log4j output of the corresponding Kafka tools, etc.
> However, at least some of these log levels are no longer propagating, which 
> makes tests much harder to debug after they have run.
> E.g. KafkaService specifies a DEBUG log level, but all collected log output 
> from brokers is at INFO level or above.





[GitHub] kafka pull request: KAFKA-2820: Remove log threshold on appender i...

2015-11-18 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/556

KAFKA-2820: Remove log threshold on appender in tools-log4j.properties

Removed a config in tools-log4j.properties which prevented certain service 
classes from running at TRACE level.




[jira] [Resolved] (KAFKA-2860) New consumer should handle auto-commit errors more gracefully

2015-11-18 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2860.
--
   Resolution: Fixed
 Reviewer: Guozhang Wang
Fix Version/s: 0.9.1.0

> New consumer should handle auto-commit errors more gracefully
> -
>
> Key: KAFKA-2860
> URL: https://issues.apache.org/jira/browse/KAFKA-2860
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> Currently, errors encountered when doing async commits with auto-commit enabled 
> end up polluting user logs. This includes both transient commit failures 
> (such as disconnects) and permanent failures (such as illegal 
> generation). It would be nice to handle these errors more gracefully by 
> retrying transient failures and perhaps logging permanent failures as 
> warnings, since the stack trace doesn't really help the user.
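One way to read "more gracefully" is a simple classification policy: retry transient errors, log permanent ones as warnings without a stack trace. A hedged Python sketch of that policy (the exception names and the callback shape are illustrative, not the Java consumer's actual API):

```python
# Illustrative sketch of the retry-vs-warn policy described above; the
# exception classes and helper are hypothetical, not Kafka client code.
class DisconnectError(Exception):
    """Transient: the commit can simply be retried."""

class IllegalGenerationError(Exception):
    """Permanent: the group generation moved on; retrying cannot help."""

def handle_autocommit_error(err, retry, warn):
    if isinstance(err, DisconnectError):
        retry()                                   # transient: retry silently
    else:
        warn("auto-commit failed: %s" % err)      # permanent: warn, no trace

warnings = []
handle_autocommit_error(IllegalGenerationError("generation 5 != 7"),
                        retry=lambda: None, warn=warnings.append)
print(warnings[0])
```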





[GitHub] kafka pull request: KAFKA-2860: better handling of auto commit err...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/553




[jira] [Commented] (KAFKA-2860) New consumer should handle auto-commit errors more gracefully

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012537#comment-15012537
 ] 

ASF GitHub Bot commented on KAFKA-2860:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/553







[jira] [Comment Edited] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012528#comment-15012528
 ] 

Geoff Anderson edited comment on KAFKA-2845 at 11/19/15 1:15 AM:
-

[~guozhang] Yeah, that seems consistent with what is happening. Here is an 
example of the behavior which these tests essentially document:

0.9 java producer issues V1 produce request
0.8.2.2 broker returns V0 produce request response
0.9 java producer tries to parse the response it receives (V0) as V1 since that 
is what it expects. These schemas differ => parse error on producer



was (Author: granders):
[~guozhang] An example of the behavior we see:

0.9 java producer issues V1 produce request
0.8.2.2 broker returns V0 produce request response
0.9 java producer tries to parse the response it receives (V0) as V1 since that 
is what it expects. These schemas differ => parse error on producer


> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.1.0
>
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012529#comment-15012529
 ] 

ASF GitHub Bot commented on KAFKA-2845:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/537







[jira] [Comment Edited] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012528#comment-15012528
 ] 

Geoff Anderson edited comment on KAFKA-2845 at 11/19/15 1:13 AM:
-

[~guozhang] An example of the behavior we see:

0.9 java producer issues V1 produce request
0.8.2.2 broker returns V0 produce request response
0.9 java producer tries to parse the response it receives (V0) as V1 since that 
is what it expects. These schemas differ => parse error on producer



was (Author: granders):
[~guozhang] An example of the behavior we see:

0.9 java producer issues V1 produce request
0.8.2.2 broker returns V0 produce request response
0.9 java producer tries to parse the response it receives (V0) as V1 since that 
is what it expects. These differ => parse error on producer







[jira] [Updated] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2845:
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)






[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012528#comment-15012528
 ] 

Geoff Anderson commented on KAFKA-2845:
---

[~guozhang] An example of the behavior we see:

0.9 java producer issues V1 produce request
0.8.2.2 broker returns V0 produce request response
0.9 java producer tries to parse the response it receives (V0) as V1 since that 
is what it expects. These differ => parse error on producer







[GitHub] kafka pull request: KAFKA-2845: new client old broker compatibilit...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/537




[jira] [Commented] (KAFKA-2846) Add Ducktape test for kafka-consumer-groups

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012518#comment-15012518
 ] 

ASF GitHub Bot commented on KAFKA-2846:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/555

KAFKA-2846: Add Ducktape test for kafka-consumer-groups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2846

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/555.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #555


commit 623cffab6f6154242546607ee5acf680aef7c9d8
Author: Ashish Singh 
Date:   2015-11-19T00:28:53Z

KAFKA-2846: Add Ducktape test for kafka-consumer-groups




> Add Ducktape test for kafka-consumer-groups
> ---
>
> Key: KAFKA-2846
> URL: https://issues.apache.org/jira/browse/KAFKA-2846
> Project: Kafka
>  Issue Type: Test
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> kafka-consumer-groups is a user facing tool. Having system tests will make 
> sure that we are not changing its behavior unintentionally.





[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012516#comment-15012516
 ] 

Guozhang Wang commented on KAFKA-2845:
--

[~geoffra] In 0.8.2.2 the broker does not respond with the version matching the 
request's version id, but always uses the latest version; in 0.9.0.0 we have 
changed this behavior. On the other hand, 0.8.2.2 introduces no version changes 
relative to any older 0.8.2 releases.






[GitHub] kafka pull request: KAFKA-2846: Add Ducktape test for kafka-consum...

2015-11-18 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/555

KAFKA-2846: Add Ducktape test for kafka-consumer-groups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2846

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/555.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #555


commit 623cffab6f6154242546607ee5acf680aef7c9d8
Author: Ashish Singh 
Date:   2015-11-19T00:28:53Z

KAFKA-2846: Add Ducktape test for kafka-consumer-groups




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-18 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012497#comment-15012497
 ] 

Guozhang Wang commented on KAFKA-2843:
--

Was there a leader failover around the same time for that partition? There is a 
potential issue described in KAFKA-2334, but it is only possible if a leader 
failover happens.

> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>Assignee: jin xing
>
> I use a simple consumer to fetch messages from brokers (fetchSize > 
> messageSize). When the consumer gets an empty messageSet, e.g.:
> val offset = nextOffset
> val request = buildRequest(offset)
> val response = consumer.fetch(request)
> val msgSet = response.messageSet(topic, partition)
> 
> if (msgSet.isEmpty) {
>   val hwOffset = response.highWatermark(topic, partition)
> 
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // in our scenario, I found highWatermark may not equal the current
>     // offset, but we did not reproduce it.
>     // Can this case happen? If so, why?
>   }
> }





Build failed in Jenkins: kafka-trunk-jdk8 #167

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2859: Fix deadlock in WorkerSourceTask.

[confluent] KAFKA-2820: systest log level

--
[...truncated 1451 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.G

[jira] [Resolved] (KAFKA-1904) run sanity failed test

2015-11-18 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-1904.
--
Resolution: Won't Fix
  Assignee: Ewen Cheslack-Postava

This was from the old system test framework, which has since been retired.

> run sanity failed test
> --
>
> Key: KAFKA-1904
> URL: https://issues.apache.org/jira/browse/KAFKA-1904
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Attachments: run_sanity.log.gz
>
>
> _test_case_name  :  testcase_1
> _test_class_name  :  ReplicaBasicTest
> arg : bounce_broker  :  true
> arg : broker_type  :  leader
> arg : message_producing_free_time_sec  :  15
> arg : num_iteration  :  2
> arg : num_messages_to_produce_per_producer_call  :  50
> arg : num_partition  :  2
> arg : replica_factor  :  3
> arg : sleep_seconds_between_producer_calls  :  1
> validation_status  : 
>  Test completed  :  FAILED





[jira] [Comment Edited] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012465#comment-15012465
 ] 

Geoff Anderson edited comment on KAFKA-2845 at 11/19/15 12:34 AM:
--

[~gwenshap] That is the behavior we expected... but our discovery when adding 
these tests was that the 0.8.X brokers *don't* close the connection for at 
least some requests, and the error occurs on the client side when it tries to 
parse the response.

I looked at NetworkClient.handleCompletedReceives in 0.8.2.2 and, unless I'm 
missing something, I don't see a version check there. It looks like 
ProtoUtils.currentResponseSchema simply picks the latest version for the given 
apiKey:

{code:title=NetworkClient.java(0.8.2.2)}
private void handleCompletedReceives(List<ClientResponse> responses, long now) {
    for (NetworkReceive receive : this.selector.completedReceives()) {
        int source = receive.source();
        ClientRequest req = inFlightRequests.completeNext(source);
        ResponseHeader header = ResponseHeader.parse(receive.payload());
        short apiKey = req.request().header().apiKey();
        Struct body = (Struct) ProtoUtils.currentResponseSchema(apiKey).read(receive.payload());
        correlate(req.request().header(), header);
        if (apiKey == ApiKeys.METADATA.id) {
            handleMetadataResponse(req.request().header(), body, now);
        } else {
            // need to add body/header to response here
            responses.add(new ClientResponse(req, now, false, body));
        }
    }
}
{code}

{code:title=ProtoUtils.java(0.8.2.2)}
public static Schema currentResponseSchema(int apiKey) {
return schemaFor(Protocol.RESPONSES, apiKey, latestVersion(apiKey));
}
{code}



was (Author: granders):
[~gwenshap] That is the behavior we expected... but our discovery when adding 
these tests was that the 0.8.X brokers *don't* close the connection for at 
least some requests, and the error occurs on the client side when it tries to 
parse the response.

I looked at NetworkClient.handleCompletedReceives in 0.8.2.2 and, unless I'm 
missing something, I don't see a version check there. It looks like 
ProtoUtils.currentResponseSchema simply picks the latest version for the given 
apiKey:

{code:title=NetworkClient.java}
private void handleCompletedReceives(List<ClientResponse> responses, long now) {
    for (NetworkReceive receive : this.selector.completedReceives()) {
        int source = receive.source();
        ClientRequest req = inFlightRequests.completeNext(source);
        ResponseHeader header = ResponseHeader.parse(receive.payload());
        short apiKey = req.request().header().apiKey();
        Struct body = (Struct) ProtoUtils.currentResponseSchema(apiKey).read(receive.payload());
        correlate(req.request().header(), header);
        if (apiKey == ApiKeys.METADATA.id) {
            handleMetadataResponse(req.request().header(), body, now);
        } else {
            // need to add body/header to response here
            responses.add(new ClientResponse(req, now, false, body));
        }
    }
}
{code}

{code:title=ProtoUtils.java}
public static Schema currentResponseSchema(int apiKey) {
return schemaFor(Protocol.RESPONSES, apiKey, latestVersion(apiKey));
}
{code}


> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012465#comment-15012465
 ] 

Geoff Anderson commented on KAFKA-2845:
---

[~gwenshap] That is the behavior we expected... but our discovery when adding 
these tests was that the 0.8.X brokers *don't* close the connection for at 
least some requests, and the error occurs on the client side when it tries to 
parse the response.

I looked at NetworkClient.handleCompletedReceives in 0.8.2.2 and, unless I'm 
missing something, I don't see a version check there. It looks like 
ProtoUtils.currentResponseSchema simply picks the latest version for the given 
apiKey:

{code:title=NetworkClient.java}
private void handleCompletedReceives(List<ClientResponse> responses, long now) {
    for (NetworkReceive receive : this.selector.completedReceives()) {
        int source = receive.source();
        ClientRequest req = inFlightRequests.completeNext(source);
        ResponseHeader header = ResponseHeader.parse(receive.payload());
        short apiKey = req.request().header().apiKey();
        Struct body = (Struct) ProtoUtils.currentResponseSchema(apiKey).read(receive.payload());
        correlate(req.request().header(), header);
        if (apiKey == ApiKeys.METADATA.id) {
            handleMetadataResponse(req.request().header(), body, now);
        } else {
            // need to add body/header to response here
            responses.add(new ClientResponse(req, now, false, body));
        }
    }
}
{code}

{code:title=ProtoUtils.java}
public static Schema currentResponseSchema(int apiKey) {
return schemaFor(Protocol.RESPONSES, apiKey, latestVersion(apiKey));
}
{code}


> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





[jira] [Updated] (KAFKA-2851) system tests: error copying keytab file

2015-11-18 Thread Anna Povzner (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anna Povzner updated KAFKA-2851:

Reviewer: Gwen Shapira
  Status: Patch Available  (was: In Progress)

In the same PR as KAFKA-2825

> system tests: error copying keytab file
> ---
>
> Key: KAFKA-2851
> URL: https://issues.apache.org/jira/browse/KAFKA-2851
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Anna Povzner
>Priority: Minor
>
> It is best to use unique paths for temporary files on the test driver machine 
> so that multiple test jobs don't conflict. 
> If the test driver machine is running multiple ducktape jobs concurrently, as 
> is the case with Confluent nightly test runs, conflicts can occur if the same 
> canonical path is always used.
> In this case, security_config.py copies a file to /tmp/keytab on the test 
> driver machine, while other jobs may remove this file from the driver machine. 
> You can then get errors like this:
> {code}
> 
> test_id:
> 2015-11-17--001.kafkatest.tests.replication_test.ReplicationTest.test_replication_with_broker_failure.security_protocol=SASL_PLAINTEXT.failure_mode=clean_bounce
> status: FAIL
> run time:   1 minute 33.395 seconds
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in test_replication_with_broker_failure
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 66, in run_produce_consume_validate
> core_test_action()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in 
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 43, in clean_bounce
> test.kafka.restart_node(prev_leader_node, clean_shutdown=True)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 275, in restart_node
> self.start_node(node)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 123, in start_node
> self.security_config.setup_node(node)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/security/security_config.py",
>  line 130, in setup_node
> node.account.scp_to(MiniKdc.LOCAL_KEYTAB_FILE, SecurityConfig.KEYTAB_PATH)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/cluster/remoteaccount.py",
>  line 174, in scp_to
> return self._ssh_quiet(self.scp_to_command(src, dest, recursive))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/cluster/remoteaccount.py",
>  line 219, in _ssh_quiet
> raise e
> CalledProcessError: Command 'scp -o 'HostName 52.33.250.202' -o 'Port 22' -o 
> 'UserKnownHostsFile /dev/null' -o 'StrictHostKeyChecking no' -o 
> 'PasswordAuthentication no' -o 'IdentityFile /var/lib/jenkins/muckrake.pem' 
> -o 'IdentitiesOnly yes' -o 'LogLevel FATAL'  /tmp/keytab 
> ubuntu@worker2:/mnt/security/keytab' returned non-zero exit status 1
> {code}
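The fix described above amounts to giving each test job its own temporary directory instead of a shared canonical path. A minimal sketch of the idea (the helper name and `job_id` parameter are hypothetical, not the actual security_config.py change):

```python
import os
import tempfile

def unique_keytab_path(job_id):
    # One private directory per job, so concurrent runs never
    # share a canonical path like /tmp/keytab.
    d = tempfile.mkdtemp(prefix="ducktape-%s-" % job_id)
    return os.path.join(d, "keytab")
```

Two concurrent jobs get distinct paths, so one job deleting its copy cannot break another's scp.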





[jira] [Commented] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012438#comment-15012438
 ] 

Gwen Shapira commented on KAFKA-2845:
-

[~granders] The broker responds not with the latest version, but with the 
version that matches what it got from the client. You can look at the code in 
NetworkClient.handleCompletedReceives to see how the response schema is 
generated from the request version.

In an older broker: if the client sends V1, the broker should see that it is 
V1, figure out it's not a version it can handle (since it's an old broker), 
write an error, and close the connection.

That's why new brokers work fine with old clients but not vice versa.
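The two lookup policies being debated here can be illustrated with a toy schema table. This is a hedged sketch, not Kafka's actual ProtoUtils: the table and schema names are invented, and only the lookup policy mirrors the discussion (0.8.2.x-style clients always take the latest known version; 0.9.0.x-style clients parse with the version the request was sent with).

```python
# Toy response-schema table: (api_key, version) -> schema name.
RESPONSE_SCHEMAS = {
    ("METADATA", 0): "metadata_response_v0",
    ("METADATA", 1): "metadata_response_v1",
}

def current_response_schema(api_key):
    # 0.8.2.x-style lookup: ignore the request version, pick the latest.
    latest = max(v for (k, v) in RESPONSE_SCHEMAS if k == api_key)
    return RESPONSE_SCHEMAS[(api_key, latest)]

def response_schema_for_request(api_key, request_version):
    # 0.9.0.x-style lookup: parse with the version the request used.
    return RESPONSE_SCHEMAS[(api_key, request_version)]
```

Under the first policy, a client that sent a v0 request still parses the response with the v1 schema, which is exactly the kind of client-side parse failure described in this thread.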

> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





[jira] [Updated] (KAFKA-2825) Add controller failover to existing replication tests

2015-11-18 Thread Anna Povzner (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anna Povzner updated KAFKA-2825:

Reviewer: Gwen Shapira
  Status: Patch Available  (was: In Progress)

The PR also contains the fix for KAFKA-2851. Will assign the same reviewer.

> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce





Build failed in Jenkins: kafka-trunk-jdk7 #835

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2859: Fix deadlock in WorkerSourceTask.

--
[...truncated 2762 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidJoin

[jira] [Updated] (KAFKA-2845) Add 0.9 clients vs 0.8 brokers compatibility test

2015-11-18 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2845:
--
Reviewer: Guozhang Wang  (was: Ewen Cheslack-Postava)

> Add 0.9 clients vs 0.8 brokers compatibility test
> -
>
> Key: KAFKA-2845
> URL: https://issues.apache.org/jira/browse/KAFKA-2845
> Project: Kafka
>  Issue Type: Task
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Add a simple test or two to document and understand what behavior to expect 
> if users try to run 0.9 java producer or 0.9 scala consumer ("old consumer") 
> against an 0.8.X broker cluster





[GitHub] kafka pull request: add example to send customized message by usin...

2015-11-18 Thread stumped2
Github user stumped2 closed the pull request at:

https://github.com/apache/kafka/pull/495




[jira] [Resolved] (KAFKA-2820) System tests: log level is no longer propagating from service classes

2015-11-18 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2820.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

> System tests: log level is no longer propagating from service classes
> -
>
> Key: KAFKA-2820
> URL: https://issues.apache.org/jira/browse/KAFKA-2820
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.1.0
>
>
> Many system test service classes specify a log level which should be 
> reflected in the log4j output of the corresponding kafka tools etc.
> However, at least some of these log levels are no longer propagating, which 
> makes tests much harder to debug after they have run.
> E.g. KafkaService specifies a DEBUG log level, but all collected log output 
> from brokers is at INFO level or above.





[jira] [Commented] (KAFKA-2820) System tests: log level is no longer propagating from service classes

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012317#comment-15012317
 ] 

ASF GitHub Bot commented on KAFKA-2820:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/538


> System tests: log level is no longer propagating from service classes
> -
>
> Key: KAFKA-2820
> URL: https://issues.apache.org/jira/browse/KAFKA-2820
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Many system test service classes specify a log level which should be 
> reflected in the log4j output of the corresponding kafka tools etc.
> However, at least some of these log levels are no longer propagating, which 
> makes tests much harder to debug after they have run.
> E.g. KafkaService specifies a DEBUG log level, but all collected log output 
> from brokers is at INFO level or above.





[GitHub] kafka pull request: KAFKA-2820: systest log level

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/538




Build failed in Jenkins: kafka_0.9.0_jdk7 #33

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2859: Fix deadlock in WorkerSourceTask.

--
[...truncated 3482 lines...]
kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

635 tests completed, 1 failed
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> There were failing tests. See the report at: 
> file://

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at 
org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at 
org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at 
org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at 
org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
at 
org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at 
org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:203)
at 
org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:185)
at 
org.gradle.execution.taskgraph.AbstractTaskPlanExecutor$TaskExecutorWorker.processTask(AbstractTaskPlanExecutor.java:62)

[jira] [Assigned] (KAFKA-2820) System tests: log level is no longer propagating from service classes

2015-11-18 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson reassigned KAFKA-2820:
-

Assignee: Geoff Anderson

> System tests: log level is no longer propagating from service classes
> -
>
> Key: KAFKA-2820
> URL: https://issues.apache.org/jira/browse/KAFKA-2820
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Many system test service classes specify a log level which should be 
> reflected in the log4j output of the corresponding kafka tools etc.
> However, at least some of these log levels are no longer propagating, which 
> makes tests much harder to debug after they have run.
> E.g. KafkaService specifies a DEBUG log level, but all collected log output 
> from brokers is at INFO level or above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #166

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[confluent] MINOR: Update to Gradle 2.9 and update generated `gradlew` file

--
[...truncated 6598 lines...]
org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateAndStop PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSourceConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSinkConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distrib

Build failed in Jenkins: kafka-trunk-jdk7 #834

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[confluent] MINOR: Update to Gradle 2.9 and update generated `gradlew` file

--
[...truncated 1434 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorRespons

[jira] [Updated] (KAFKA-2859) Deadlock in WorkerSourceTask

2015-11-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2859:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 554
[https://github.com/apache/kafka/pull/554]

> Deadlock in WorkerSourceTask
> 
>
> Key: KAFKA-2859
> URL: https://issues.apache.org/jira/browse/KAFKA-2859
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0, 0.9.1.0
>
>
> There is a potential deadlock due to synchronization used around both 
> producer.send() and the produce callback in WorkerSourceTask. This can be 
> triggered, for example, if:
> 1. WorkerSourceTask work thread is running sendRecords and therefore owns the 
> lock on itself, then invokes producer.send() and needs a metadata update 
> (sending to a new topic), which causes it to wait on the Metadata class, 
> which is notified by the Sender thread when it has updated metadata.
> 2. Sender thread is processing a message completion, invokes the callback in 
> WorkerSourceTask, which then tries to invoke recordSent, which needs to 
> acquire the WorkerSourceTask lock. It will wait on this lock and never 
> process the metadata update request, so the other thread will never proceed 
> either.





[jira] [Commented] (KAFKA-2859) Deadlock in WorkerSourceTask

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012174#comment-15012174
 ] 

ASF GitHub Bot commented on KAFKA-2859:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/554


> Deadlock in WorkerSourceTask
> 
>
> Key: KAFKA-2859
> URL: https://issues.apache.org/jira/browse/KAFKA-2859
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> There is a potential deadlock due to synchronization used around both the 
> producer.send() and in the produce callback in WorkerSourceTask. This can be 
> triggered, for example, if:
> 1. WorkerSourceTask work thread is running sendRecords and therefore owns the 
> lock on itself, then invokes producer.send() and needs a metadata update 
> (sending to a new topic), which causes it to wait on the Metadata class, 
> which is notified by the Sender thread when it has updated metadata.
> 2. Sender thread is processing a message completion, invokes the callback in 
> WorkerSourceTask, which then tries to invoke recordSent, which needs to 
> acquire the WorkerSourceTask lock. It will wait on this lock and never 
> process the metadata update request, so the other thread will never proceed 
> either.





[GitHub] kafka pull request: KAFKA-2859: Fix deadlock in WorkerSourceTask.

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/554




[jira] [Updated] (KAFKA-2861) system tests: grep logs for errors as part of validation

2015-11-18 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2861:
--
Description: 
There may be errors going on under the hood that validation steps do not 
detect, but which are logged at the ERROR level by brokers or clients. We are 
more likely to catch subtle issues if we pattern match the server log for ERROR 
as part of validation, and fail the test in this case.

For example, in https://issues.apache.org/jira/browse/KAFKA-2813, the error is 
transient, so our test may pass; however, we still want this issue to be 
visible.

To avoid spurious failures, we would probably want to be able to have a 
whitelist of acceptable errors.


  was:
There may be errors going on under the hood that validation steps do not 
detect, but which are logged at the ERROR level by brokers or clients. We are 
more likely to catch subtle issues if we pattern match for ERROR as part of 
validation, and fail the test in this case.

For example, in https://issues.apache.org/jira/browse/KAFKA-2813, the error is 
transient, so our test may pass; however, we still want this issue to be 
visible.

To avoid spurious failures, we would probably want to be able to have a 
whitelist of acceptable errors.



> system tests: grep logs for errors as part of validation
> 
>
> Key: KAFKA-2861
> URL: https://issues.apache.org/jira/browse/KAFKA-2861
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>
> There may be errors going on under the hood that validation steps do not 
> detect, but which are logged at the ERROR level by brokers or clients. We are 
> more likely to catch subtle issues if we pattern match the server log for 
> ERROR as part of validation, and fail the test in this case.
> For example, in https://issues.apache.org/jira/browse/KAFKA-2813, the error 
> is transient, so our test may pass; however, we still want this issue to be 
> visible.
> To avoid spurious failures, we would probably want to be able to have a 
> whitelist of acceptable errors.





[jira] [Created] (KAFKA-2861) system tests: grep logs for errors as part of validation

2015-11-18 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2861:
-

 Summary: system tests: grep logs for errors as part of validation
 Key: KAFKA-2861
 URL: https://issues.apache.org/jira/browse/KAFKA-2861
 Project: Kafka
  Issue Type: Bug
Reporter: Geoff Anderson


There may be errors going on under the hood that validation steps do not 
detect, but which are logged at the ERROR level by brokers or clients. We are 
more likely to catch subtle issues if we pattern match for ERROR as part of 
validation, and fail the test in this case.

For example, in https://issues.apache.org/jira/browse/KAFKA-2813, the error is 
transient, so our test may pass; however, we still want this issue to be 
visible.

To avoid spurious failures, we would probably want to be able to have a 
whitelist of acceptable errors.
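A minimal sketch of what such a validation step might look like in the Python-based system tests; the helper name and the whitelist entry are hypothetical, not part of any existing test harness:

```python
import re

# Hypothetical whitelist of ERROR patterns known to be benign or transient.
WHITELIST = [
    re.compile(r"ERROR .*Connection to .* could not be established"),
]

def unexpected_errors(log_lines, whitelist=WHITELIST):
    """Return ERROR lines matching no whitelist pattern; a validation step
    would fail the test when this list is non-empty."""
    return [line for line in log_lines
            if "ERROR" in line
            and not any(p.search(line) for p in whitelist)]

log = [
    "INFO starting broker 1",
    "ERROR Connection to node 2 could not be established",  # whitelisted
    "ERROR Uncaught exception in scheduled task",           # should flag
]
print(unexpected_errors(log))  # ['ERROR Uncaught exception in scheduled task']
```

The whitelist keeps known-transient errors (like the KAFKA-2813 case) from causing spurious failures while still surfacing anything unexpected.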






[jira] [Updated] (KAFKA-2859) Deadlock in WorkerSourceTask

2015-11-18 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2859:
-
Status: Patch Available  (was: Open)

> Deadlock in WorkerSourceTask
> 
>
> Key: KAFKA-2859
> URL: https://issues.apache.org/jira/browse/KAFKA-2859
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> There is a potential deadlock due to synchronization used around both the 
> producer.send() and in the produce callback in WorkerSourceTask. This can be 
> triggered, for example, if:
> 1. WorkerSourceTask work thread is running sendRecords and therefore owns the 
> lock on itself, then invokes producer.send() and needs a metadata update 
> (sending to a new topic), which causes it to wait on the Metadata class, 
> which is notified by the Sender thread when it has updated metadata.
> 2. Sender thread is processing a message completion, invokes the callback in 
> WorkerSourceTask, which then tries to invoke recordSent, which needs to 
> acquire the WorkerSourceTask lock. It will wait on this lock and never 
> process the metadata update request, so the other thread will never proceed 
> either.





[jira] [Commented] (KAFKA-2859) Deadlock in WorkerSourceTask

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012010#comment-15012010
 ] 

ASF GitHub Bot commented on KAFKA-2859:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/554

KAFKA-2859: Fix deadlock in WorkerSourceTask.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2859-deadlock-worker-source-task

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/554.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #554


commit 30ae78f2e58a56319b7d173c6e308e9c37214eee
Author: Ewen Cheslack-Postava 
Date:   2015-11-18T19:38:29Z

KAFKA-2859: Fix deadlock in WorkerSourceTask.




> Deadlock in WorkerSourceTask
> 
>
> Key: KAFKA-2859
> URL: https://issues.apache.org/jira/browse/KAFKA-2859
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> There is a potential deadlock due to synchronization used around both the 
> producer.send() and in the produce callback in WorkerSourceTask. This can be 
> triggered, for example, if:
> 1. WorkerSourceTask work thread is running sendRecords and therefore owns the 
> lock on itself, then invokes producer.send() and needs a metadata update 
> (sending to a new topic), which causes it to wait on the Metadata class, 
> which is notified by the Sender thread when it has updated metadata.
> 2. Sender thread is processing a message completion, invokes the callback in 
> WorkerSourceTask, which then tries to invoke recordSent, which needs to 
> acquire the WorkerSourceTask lock. It will wait on this lock and never 
> process the metadata update request, so the other thread will never proceed 
> either.





[GitHub] kafka pull request: KAFKA-2859: Fix deadlock in WorkerSourceTask.

2015-11-18 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/554

KAFKA-2859: Fix deadlock in WorkerSourceTask.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2859-deadlock-worker-source-task

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/554.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #554


commit 30ae78f2e58a56319b7d173c6e308e9c37214eee
Author: Ewen Cheslack-Postava 
Date:   2015-11-18T19:38:29Z

KAFKA-2859: Fix deadlock in WorkerSourceTask.






[jira] [Commented] (KAFKA-2860) New consumer should handle auto-commit errors more gracefully

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011961#comment-15011961
 ] 

ASF GitHub Bot commented on KAFKA-2860:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/553

KAFKA-2860: better handling of auto commit errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2860

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/553.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #553


commit 597b1b7d84d1ab597d74b16f07be0b04dbd03cc6
Author: Jason Gustafson 
Date:   2015-11-18T20:30:39Z

KAFKA-2860: better handling of auto commit errors




> New consumer should handle auto-commit errors more gracefully
> -
>
> Key: KAFKA-2860
> URL: https://issues.apache.org/jira/browse/KAFKA-2860
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> Currently errors encountered when doing async commits with autocommit enabled 
> end up polluting user logs. This includes both transient commit failures 
> (such as disconnects) as well as permanent failures (such as illegal 
> generation). It would be nice to handle these errors more gracefully by 
> retrying for transient failures and perhaps logging the permanent failures as 
> warnings since the stack trace doesn't really help the user.
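The retry-then-downgrade policy suggested above can be sketched as follows; the exception names and the auto_commit helper are hypothetical stand-ins, not the KafkaConsumer API:

```python
class RetriableCommitError(Exception):
    """Stand-in for a transient commit failure, e.g. a disconnect."""

class PermanentCommitError(Exception):
    """Stand-in for a permanent failure, e.g. an illegal generation."""

def auto_commit(send, retries=3, log=print):
    """Retry transient failures quietly; log permanent ones as a single
    warning instead of surfacing a stack trace in user logs."""
    for _ in range(retries):
        try:
            send()
            return True
        except RetriableCommitError:
            continue  # transient: retry without polluting the logs
        except PermanentCommitError as e:
            log("WARN auto-commit failed permanently: %s" % e)
            return False
    log("WARN auto-commit gave up after %d attempts" % retries)
    return False

# One transient failure, then success: the user log stays clean.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RetriableCommitError("disconnect")

print(auto_commit(flaky, log=lambda msg: None))  # True
```

The key design point is the asymmetry: transient errors are retried silently, while permanent ones produce exactly one warning line rather than an exception trace.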





[GitHub] kafka pull request: KAFKA-2860: better handling of auto commit err...

2015-11-18 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/553

KAFKA-2860: better handling of auto commit errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2860

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/553.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #553


commit 597b1b7d84d1ab597d74b16f07be0b04dbd03cc6
Author: Jason Gustafson 
Date:   2015-11-18T20:30:39Z

KAFKA-2860: better handling of auto commit errors






[GitHub] kafka pull request: MINOR: Update to Gradle 2.9 and update generat...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/549




[jira] [Commented] (KAFKA-2643) Run mirror maker tests in ducktape with SSL

2015-11-18 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011784#comment-15011784
 ] 

Rajini Sivaram commented on KAFKA-2643:
---

[~ijuma] Yes, planning to work on this and KAFKA-2642 tomorrow.

> Run mirror maker tests in ducktape with SSL
> ---
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT. Should be run with 
> SSL as well. This requires console consumer timeout in new consumers which is 
> being added in KAFKA-2603





Build failed in Jenkins: kafka-trunk-jdk7 #833

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2800; Update outdated dependencies

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 8f6ffe1c28d43bebf1cef7198c1e154a2adc92c3 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8f6ffe1c28d43bebf1cef7198c1e154a2adc92c3
 > git rev-list 17c6f33126823c4f54ea22ed2284267271b58e21 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3000308825690061892.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 23.574 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1599798700770937124.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_6
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
Download 
https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2/snappy-java-1.1.2.pom
Download 
https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2/snappy-java-1.1.2.jar
:jar_core_2_10_6 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 22.345 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-11-18 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011781#comment-15011781
 ] 

Rajini Sivaram commented on KAFKA-2421:
---

[~granthenke] Thank you for picking this up.

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Grant Henke
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.<init>(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.<init>(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:51)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:55)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> java -version
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pxa6470_27sr3fp1-20150605_01(SR3 FP1))
> IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 
> 20150407_243189 (JIT enabled, AOT enabled)
> J9VM - R27_Java727_SR3_20150407_1831_B243189
> JIT  - tr.r13.java_20150406_89182
> GC   - R27_Java727_SR3_20150407_1831_B243189_CMPRSS
> J9CL - 20150407_243189)
> JCL - 20150601_01 based on Oracle 7u79-b14





Build failed in Jenkins: kafka-trunk-jdk8 #165

2015-11-18 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2800; Update outdated dependencies

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 8f6ffe1c28d43bebf1cef7198c1e154a2adc92c3 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8f6ffe1c28d43bebf1cef7198c1e154a2adc92c3
 > git rev-list 17c6f33126823c4f54ea22ed2284267271b58e21 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson172011764802737177.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 16.598 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson120511471921357638.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_6
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
Download 
https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2/snappy-java-1.1.2.pom
Download 
https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2/snappy-java-1.1.2.jar
:jar_core_2_10_6 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.548 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Created] (KAFKA-2860) New consumer should handle auto-commit errors more gracefully

2015-11-18 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2860:
--

 Summary: New consumer should handle auto-commit errors more 
gracefully
 Key: KAFKA-2860
 URL: https://issues.apache.org/jira/browse/KAFKA-2860
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Currently errors encountered when doing async commits with autocommit enabled 
end up polluting user logs. This includes both transient commit failures (such 
as disconnects) as well as permanent failures (such as illegal generation). It 
would be nice to handle these errors more gracefully by retrying for transient 
failures and perhaps logging the permanent failures as warnings since the stack 
trace doesn't really help the user.
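The behavior proposed above could be sketched like this. It is a minimal illustration; names such as CommitException and commitWithRetry are made up for the sketch and are not the actual consumer internals. Transient failures are retried a bounded number of times, while permanent ones are downgraded to a single warning line:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AutoCommitErrorHandling {

    // Hypothetical failure type: transient (e.g. disconnect) vs.
    // permanent (e.g. illegal generation).
    static class CommitException extends RuntimeException {
        final boolean transientFailure;
        CommitException(String message, boolean transientFailure) {
            super(message);
            this.transientFailure = transientFailure;
        }
    }

    static final int MAX_RETRIES = 3;

    /** Runs the commit, retrying transient failures; returns the attempts made. */
    static int commitWithRetry(Runnable commit) {
        for (int attempt = 1; ; attempt++) {
            try {
                commit.run();
                return attempt;
            } catch (CommitException e) {
                if (e.transientFailure && attempt < MAX_RETRIES) {
                    continue; // transient failure: retry quietly
                }
                // Permanent failure, or retries exhausted: a short warning is
                // more useful to the user than a full stack trace.
                System.out.println("WARN auto-commit failed: " + e.getMessage());
                return attempt;
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // Fails once with a transient error, then succeeds on the retry.
        int attempts = commitWithRetry(() -> {
            if (calls.incrementAndGet() == 1) {
                throw new CommitException("disconnect", true);
            }
        });
        System.out.println("attempts=" + attempts);
    }
}
```

Running main shows the transient failure being retried silently (two attempts, no warning); a permanent failure would instead produce only the single warning line.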





[jira] [Commented] (KAFKA-2800) Update outdated dependencies

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011636#comment-15011636
 ] 

ASF GitHub Bot commented on KAFKA-2800:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/513


> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates





[jira] [Updated] (KAFKA-2800) Update outdated dependencies

2015-11-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2800:
---
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 513
[https://github.com/apache/kafka/pull/513]

> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates





[GitHub] kafka pull request: KAFKA-2800: Update outdated dependencies

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/513


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Is a Kafka 0.9 broker supposed to connect to itself?

2015-11-18 Thread Damian Guy
This was a brand new cluster, so 0 topics. Every broker had the same issue
and it was all communication with itself. In any case, I deployed a later
cut and it started working.

Cheers,
Damian

On 18 November 2015 at 02:15, Jun Rao  wrote:

> There is inter-broker communication. It seems that the broker got a
> request more than the default allowed size (~10MB). How many
> topic/partitions do you have on this cluster? Do you have clients running
> on the broker host?
>
> Thanks,
>
> Jun
>
>
> On Tue, Nov 17, 2015 at 4:10 AM, Damian Guy  wrote:
>
>> I would think not.
>> I'm bringing up a new 0.9 cluster and I'm getting the below exception (and
>> the same thing on all nodes) - the IP address is the IP of the host the
>> broker is running on. I think DNS is a bit stuffed on these machines and
>> maybe that is the cause, but... any ideas?
>>
>> [2015-11-17 04:01:30,248] WARN Unexpected error from /10.137.231.233;
>> closing connection (org.apache.kafka.common.network.Selector)
>> org.apache.kafka.common.network.InvalidReceiveException: Invalid receive
>> (size = 1195725856 larger than 104857600)
>> at
>>
>> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:91)
>> at
>>
>> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>> at
>>
>> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:160)
>> at
>> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:141)
>> at
>> org.apache.kafka.common.network.Selector.poll(Selector.java:286)
>> at kafka.network.Processor.run(SocketServer.scala:413)
>> at java.lang.Thread.run(Thread.java:745)
>>
>
>
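An editorial aside, not stated in the thread: the reported size is suspicious because, read as four big-endian bytes, it is printable ASCII. Kafka requests begin with a 4-byte big-endian size prefix, so a bogus size like this usually means a non-Kafka client (for example an HTTP health check) connected to the broker port. A quick check, assuming nothing beyond the number in the log:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SizePrefixCheck {
    public static void main(String[] args) {
        // Decode the bogus "size" from the log as ASCII to reveal the
        // first four bytes the peer actually sent.
        byte[] firstFour = ByteBuffer.allocate(4).putInt(1195725856).array();
        System.out.println(new String(firstFour, StandardCharsets.US_ASCII)); // "GET "
    }
}
```

Here the four bytes decode to "GET ", i.e. the start of an HTTP request line hitting the Kafka listener.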


[jira] [Created] (KAFKA-2859) Deadlock in WorkerSourceTask

2015-11-18 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2859:


 Summary: Deadlock in WorkerSourceTask
 Key: KAFKA-2859
 URL: https://issues.apache.org/jira/browse/KAFKA-2859
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Blocker
 Fix For: 0.9.0.0


There is a potential deadlock due to synchronization used around both the 
producer.send() call and the producer callback in WorkerSourceTask. This can be 
triggered, for example, if:

1. WorkerSourceTask work thread is running sendRecords and therefore owns the 
lock on itself, then invokes producer.send() and needs a metadata update 
(sending to a new topic), which causes it to wait on the Metadata class, which 
is notified by the Sender thread when it has updated metadata.
2. Sender thread is processing a message completion, invokes the callback in 
WorkerSourceTask, which then tries to invoke recordSent, which needs to acquire 
the WorkerSourceTask lock. It will wait on this lock and never process the 
metadata update request, so the other thread will never proceed either.
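The cycle above can be sketched with two plain monitors standing in for the WorkerSourceTask and Metadata locks. This is illustrative only, not the actual Copycat code; it shows the safe shape, where the worker releases its own monitor before blocking on metadata, which is the kind of restructuring that breaks the cycle:

```java
public class WorkerDeadlockSketch {
    static final Object TASK_LOCK = new Object();     // stands in for the WorkerSourceTask monitor
    static final Object METADATA_LOCK = new Object(); // stands in for the Metadata monitor
    static boolean metadataReady = false;

    // Worker thread: must NOT hold TASK_LOCK while waiting for metadata.
    // If it did, the sender's completion callback (which needs TASK_LOCK)
    // could never run, and the metadata wait would never be notified --
    // exactly the cycle described in the report.
    static void workerSendRecords() throws InterruptedException {
        synchronized (TASK_LOCK) {
            // bookkeeping that genuinely needs the task lock goes here
        }
        synchronized (METADATA_LOCK) { // wait with TASK_LOCK already released
            while (!metadataReady) {
                METADATA_LOCK.wait();
            }
        }
    }

    // Sender thread: the completion callback takes TASK_LOCK, then publishes
    // the metadata update.
    static void senderCompleteAndUpdate() {
        synchronized (TASK_LOCK) {
            // recordSent() bookkeeping
        }
        synchronized (METADATA_LOCK) {
            metadataReady = true;
            METADATA_LOCK.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                workerSendRecords();
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();
        Thread.sleep(100); // give the worker time to start waiting
        senderCompleteAndUpdate();
        worker.join(5000);
        System.out.println("worker finished: " + !worker.isAlive());
    }
}
```

Running main completes normally; moving the metadata wait inside the TASK_LOCK block would reproduce the hang, since the sender could then never acquire TASK_LOCK to deliver the notification.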





[jira] [Commented] (KAFKA-2642) Run replication tests in ducktape with SSL for clients

2015-11-18 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011588#comment-15011588
 ] 

Ismael Juma commented on KAFKA-2642:


[~rsivaram], do you think you will still be able to submit a PR this week?

> Run replication tests in ducktape with SSL for clients
> --
>
> Key: KAFKA-2642
> URL: https://issues.apache.org/jira/browse/KAFKA-2642
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Under KAFKA-2581, replication tests were parametrized to run with SSL for 
> interbroker communication, but not for clients. When KAFKA-2603 is committed, 
> the tests should be able to use SSL for clients as well.





[jira] [Commented] (KAFKA-2643) Run mirror maker tests in ducktape with SSL

2015-11-18 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011587#comment-15011587
 ] 

Ismael Juma commented on KAFKA-2643:


[~rsivaram], do you think you will still be able to submit a PR this week?

> Run mirror maker tests in ducktape with SSL
> ---
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT; they should be run 
> with SSL as well. This requires the console consumer timeout in the new 
> consumer, which is being added in KAFKA-2603.





[jira] [Updated] (KAFKA-2858) Clarify usage of `Principal` in the authentication layer

2015-11-18 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2858:
---
Status: Patch Available  (was: Open)

Submitted a PR that shows how this would look.

> Clarify usage of `Principal` in the authentication layer
> 
>
> Key: KAFKA-2858
> URL: https://issues.apache.org/jira/browse/KAFKA-2858
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
>
> We currently use `KafkaPrincipal` at the authentication and authorization 
> layer. But there is an implicit assumption that we always use a 
> `KafkaPrincipal` with principalType == USER_TYPE, as we ignore the 
> principalType of the `KafkaPrincipal` when we create `RequestChannel.Session`.
> I think it would be clearer if we used a separate `Principal` implementation 
> in the authentication layer.
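One way the proposed separation could look; the class names here are assumptions for illustration, not the final API. The authentication layer hands out a plain java.security.Principal, and the authorization layer decides explicitly how to wrap it, instead of implicitly assuming principalType == USER_TYPE:

```java
import java.security.Principal;

public class PrincipalSketch {

    // Hypothetical authentication-layer principal: just a name, no type.
    static final class AuthenticationPrincipal implements Principal {
        private final String name;

        AuthenticationPrincipal(String name) {
            this.name = name;
        }

        @Override
        public String getName() {
            return name;
        }
    }

    public static void main(String[] args) {
        Principal authenticated = new AuthenticationPrincipal("alice");
        // An authorizer-facing, typed principal would be built explicitly
        // from the authenticated name rather than assumed:
        System.out.println("User:" + authenticated.getName());
    }
}
```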





[jira] [Commented] (KAFKA-2800) Update outdated dependencies

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011550#comment-15011550
 ] 

ASF GitHub Bot commented on KAFKA-2800:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/513


> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates





[GitHub] kafka pull request: KAFKA-2800: Update outdated dependencies

2015-11-18 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/513

KAFKA-2800: Update outdated dependencies

Changes:
* org.scala-lang:scala-library [2.10.5 -> 2.10.6]
   * Scala 2.10.6 resolves a license incompatibility in scala.util.Sorting
   * Otherwise identical to Scala 2.10.5
* org.xerial.snappy:snappy-java [1.1.1.7 -> 1.1.2]
   * Fixes SnappyOutputStream.close() is not idempotent
* net.jpountz.lz4:lz4 [1.2.0 -> 1.3]
* junit:junit [4.11 -> 4.12]
* org.easymock:easymock [3.3.1 -> 3.4]
* org.powermock:powermock-api-easymock [1.6.2 -> 1.6.3]
* org.powermock:powermock-module-junit4 [1.6.2 -> 1.6.3]
* org.slf4j:slf4j-api [1.7.6 -> 1.7.12]
* org.slf4j:slf4j-log4j12 [1.7.6 -> 1.7.12]
* com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider [2.5.4 -> 2.6.3]
* com.fasterxml.jackson.core:jackson-databind [2.5.4 -> 2.6.3]
* org.eclipse.jetty:jetty-server [9.2.12.v20150709 -> 9.2.14.v20151106]
* org.eclipse.jetty:jetty-servlet [9.2.12.v20150709 -> 9.2.14.v20151106]
* org.bouncycastle:bcpkix-jdk15on [1.52 -> 1.53]
* net.sf.jopt-simple:jopt-simple [3.2 -> 4.9]
* removed explicit entry for org.objenesis:objenesis:2.2 (resolved 
transitively)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka update-deps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/513.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #513


commit 5cb149d7e823506bf66cefd84fa77260d5c98604
Author: Grant Henke 
Date:   2015-11-12T16:14:23Z

KAFKA-2800: Update outdated dependencies

Changes:
- org.scala-lang:scala-library [2.10.5 -> 2.10.6]
- Scala 2.10.6 resolves a license incompatibility in 
scala.util.Sorting
- Otherwise identical to Scala 2.10.5
- org.xerial.snappy:snappy-java [1.1.1.7 -> 1.1.2]
- Fixes SnappyOutputStream.close() is not idempotent
- net.jpountz.lz4:lz4 [1.2.0 -> 1.3]
- junit:junit [4.11 -> 4.12]
- org.easymock:easymock [3.3.1 -> 3.4]
- org.powermock:powermock-api-easymock [1.6.2 -> 1.6.3]
- org.powermock:powermock-module-junit4 [1.6.2 -> 1.6.3]
- org.slf4j:slf4j-api [1.7.6 -> 1.7.12]
- org.slf4j:slf4j-log4j12 [1.7.6 -> 1.7.12]
- com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider [2.5.4 -> 
2.6.3]
- com.fasterxml.jackson.core:jackson-databind [2.5.4 -> 2.6.3]
- org.eclipse.jetty:jetty-server [9.2.12.v20150709 -> 9.2.14.v20151106]
- org.eclipse.jetty:jetty-servlet [9.2.12.v20150709 -> 9.2.14.v20151106]
- org.bouncycastle:bcpkix-jdk15on [1.52 -> 1.53]
- net.sf.jopt-simple:jopt-simple [3.2 -> 4.9]
- removed explicit entry for org.objenesis:objenesis:2.2 (resolved 
transitively)

commit c459e777e045af6d6d26f964944963dd617e4e6a
Author: Grant Henke 
Date:   2015-11-18T15:56:48Z

Remove lz4 upgrade






[jira] [Commented] (KAFKA-2800) Update outdated dependencies

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011551#comment-15011551
 ] 

ASF GitHub Bot commented on KAFKA-2800:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/513

KAFKA-2800: Update outdated dependencies

Changes:
* org.scala-lang:scala-library [2.10.5 -> 2.10.6]
   * Scala 2.10.6 resolves a license incompatibility in scala.util.Sorting
   * Otherwise identical to Scala 2.10.5
* org.xerial.snappy:snappy-java [1.1.1.7 -> 1.1.2]
   * Fixes SnappyOutputStream.close() is not idempotent
* net.jpountz.lz4:lz4 [1.2.0 -> 1.3]
* junit:junit [4.11 -> 4.12]
* org.easymock:easymock [3.3.1 -> 3.4]
* org.powermock:powermock-api-easymock [1.6.2 -> 1.6.3]
* org.powermock:powermock-module-junit4 [1.6.2 -> 1.6.3]
* org.slf4j:slf4j-api [1.7.6 -> 1.7.12]
* org.slf4j:slf4j-log4j12 [1.7.6 -> 1.7.12]
* com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider [2.5.4 -> 2.6.3]
* com.fasterxml.jackson.core:jackson-databind [2.5.4 -> 2.6.3]
* org.eclipse.jetty:jetty-server [9.2.12.v20150709 -> 9.2.14.v20151106]
* org.eclipse.jetty:jetty-servlet [9.2.12.v20150709 -> 9.2.14.v20151106]
* org.bouncycastle:bcpkix-jdk15on [1.52 -> 1.53]
* net.sf.jopt-simple:jopt-simple [3.2 -> 4.9]
* removed explicit entry for org.objenesis:objenesis:2.2 (resolved 
transitively)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka update-deps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/513.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #513


commit 5cb149d7e823506bf66cefd84fa77260d5c98604
Author: Grant Henke 
Date:   2015-11-12T16:14:23Z

KAFKA-2800: Update outdated dependencies

Changes:
- org.scala-lang:scala-library [2.10.5 -> 2.10.6]
- Scala 2.10.6 resolves a license incompatibility in 
scala.util.Sorting
- Otherwise identical to Scala 2.10.5
- org.xerial.snappy:snappy-java [1.1.1.7 -> 1.1.2]
- Fixes SnappyOutputStream.close() is not idempotent
- net.jpountz.lz4:lz4 [1.2.0 -> 1.3]
- junit:junit [4.11 -> 4.12]
- org.easymock:easymock [3.3.1 -> 3.4]
- org.powermock:powermock-api-easymock [1.6.2 -> 1.6.3]
- org.powermock:powermock-module-junit4 [1.6.2 -> 1.6.3]
- org.slf4j:slf4j-api [1.7.6 -> 1.7.12]
- org.slf4j:slf4j-log4j12 [1.7.6 -> 1.7.12]
- com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider [2.5.4 -> 
2.6.3]
- com.fasterxml.jackson.core:jackson-databind [2.5.4 -> 2.6.3]
- org.eclipse.jetty:jetty-server [9.2.12.v20150709 -> 9.2.14.v20151106]
- org.eclipse.jetty:jetty-servlet [9.2.12.v20150709 -> 9.2.14.v20151106]
- org.bouncycastle:bcpkix-jdk15on [1.52 -> 1.53]
- net.sf.jopt-simple:jopt-simple [3.2 -> 4.9]
- removed explicit entry for org.objenesis:objenesis:2.2 (resolved 
transitively)

commit c459e777e045af6d6d26f964944963dd617e4e6a
Author: Grant Henke 
Date:   2015-11-18T15:56:48Z

Remove lz4 upgrade




> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates





[GitHub] kafka pull request: KAFKA-2800: Update outdated dependencies

2015-11-18 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/513




[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011547#comment-15011547
 ] 

ASF GitHub Bot commented on KAFKA-2421:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/552

KAFKA-2421: Upgrade LZ4 to version 1.3

A few notes on the added test:
 * I verified this test fails when changing between snappy 1.1.1.2 and 
1.1.1.7 (per KAFKA-2189)
 * The hard coded numbers are passing before and after the lz4 change

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka lz4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/552.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #552


commit 0d6eb9948c18cdabdadcb7baa67c59fe0c8c51e2
Author: Grant Henke 
Date:   2015-11-18T17:58:09Z

KAFKA-2421: Upgrade LZ4 to version 1.3

A few notes on the added test:
 * I verified this test fails when changing between snappy 1.1.1.2 and 
1.1.1.7 (per KAFKA-2189)
 * The hard coded numbers are passing before and after the lz4 change




> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Grant Henke
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.<init>(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> 

[GitHub] kafka pull request: KAFKA-2421: Upgrade LZ4 to version 1.3

2015-11-18 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/552

KAFKA-2421: Upgrade LZ4 to version 1.3

A few notes on the added test:
 * I verified this test fails when changing between snappy 1.1.1.2 and 
1.1.1.7 (per KAFKA-2189)
 * The hard-coded numbers pass before and after the LZ4 change

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka lz4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/552.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #552


commit 0d6eb9948c18cdabdadcb7baa67c59fe0c8c51e2
Author: Grant Henke 
Date:   2015-11-18T17:58:09Z

KAFKA-2421: Upgrade LZ4 to version 1.3

A few notes on the added test:
 * I verified this test fails when changing between snappy 1.1.1.2 and 
1.1.1.7 (per KAFKA-2189)
 * The hard-coded numbers pass before and after the LZ4 change




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2858; Introduce `SimplePrincipal` and us...

2015-11-18 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/551

KAFKA-2858; Introduce `SimplePrincipal` and use it in the authentication 
layer

This makes it clear that we only support a principal name at the
authentication layer (principalType is only used at the authorization
layer).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2858-clarify-usage-of-principal

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/551.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #551


commit 8b2715441b444a435cf6cdf3aa641d1ddcbf36f1
Author: Ismael Juma 
Date:   2015-11-18T17:41:53Z

Introduce `SimplePrincipal` and use it in the authentication layer

This makes it clear that we only support a principal name at the
authentication layer (principalType is only used at the authorization
layer).




---


[jira] [Commented] (KAFKA-2858) Clarify usage of `Principal` in the authentication layer

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011539#comment-15011539
 ] 

ASF GitHub Bot commented on KAFKA-2858:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/551

KAFKA-2858; Introduce `SimplePrincipal` and use it in the authentication 
layer

This makes it clear that we only support a principal name at the
authentication layer (principalType is only used at the authorization
layer).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2858-clarify-usage-of-principal

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/551.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #551


commit 8b2715441b444a435cf6cdf3aa641d1ddcbf36f1
Author: Ismael Juma 
Date:   2015-11-18T17:41:53Z

Introduce `SimplePrincipal` and use it in the authentication layer

This makes it clear that we only support a principal name at the
authentication layer (principalType is only used at the authorization
layer).




> Clarify usage of `Principal` in the authentication layer
> 
>
> Key: KAFKA-2858
> URL: https://issues.apache.org/jira/browse/KAFKA-2858
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
>
> We currently use `KafkaPrincipal` at the authentication and authorization 
> layer. But there is an implicit assumption that we always use a 
> `KafkaPrincipal` with principalType == USER_TYPE, as we ignore the 
> principalType of the `KafkaPrincipal` when we create `RequestChannel.Session`.
> I think it would be clearer if we used a separate `Principal` implementation 
> in the authentication layer.
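Based only on the description above, the proposed authentication-layer principal could look roughly like this; the class name and shape are hypothetical and the actual PR may differ:

```java
import java.security.Principal;
import java.util.Objects;

// Hypothetical sketch of an authentication-layer principal that carries
// only a name (no principalType), per the KAFKA-2858 description.
final class SimplePrincipal implements Principal {
    private final String name;

    SimplePrincipal(String name) {
        this.name = Objects.requireNonNull(name, "name");
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof SimplePrincipal
                && name.equals(((SimplePrincipal) o).name);
    }

    @Override
    public int hashCode() {
        return name.hashCode();
    }

    @Override
    public String toString() {
        return name;
    }
}
```

The point of the change is that such a class cannot express a principalType at all, so the authentication layer's contract (name only) is enforced by the type system rather than by convention.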



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-11-18 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2421:
--

Assignee: Grant Henke  (was: Rajini Sivaram)

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Grant Henke
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:51)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:55)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> java -version
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pxa6470_27sr3fp1-20150605_01(SR3 FP1))
> IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 
> 20150407_243189 (JIT enabled, AOT enabled)
> J9VM - R27_Java727_SR3_20150407_1831_B243189
> JIT  - tr.r13.java_20150406_89182
> GC   - R27_Java727_SR3_20150407_1831_B243189_CMPRSS
> J9CL - 20150407_243189)
> JCL - 20150601_01 based on Oracle 7u79-b14





[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-11-18 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011532#comment-15011532
 ] 

Grant Henke commented on KAFKA-2421:


[~rsivaram] I am going to send a PR for this today. I hope that's okay.

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Grant Henke
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>





[jira] [Updated] (KAFKA-2858) Clarify usage of `Principal` in the authentication layer

2015-11-18 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2858:
---
Summary: Clarify usage of `Principal` in the authentication layer  (was: 
Clarify usage of `Principal` at the authentication layer)

> Clarify usage of `Principal` in the authentication layer
> 
>
> Key: KAFKA-2858
> URL: https://issues.apache.org/jira/browse/KAFKA-2858
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
>
> We currently use `KafkaPrincipal` at the authentication and authorization 
> layer. But there is an implicit assumption that we always use a 
> `KafkaPrincipal` with principalType == USER_TYPE, as we ignore the 
> principalType of the `KafkaPrincipal` when we create `RequestChannel.Session`.
> I think it would be clearer if we used a separate `Principal` implementation 
> in the authentication layer.





[jira] [Created] (KAFKA-2858) Clarify usage of `Principal` at the authentication layer

2015-11-18 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2858:
--

 Summary: Clarify usage of `Principal` at the authentication layer
 Key: KAFKA-2858
 URL: https://issues.apache.org/jira/browse/KAFKA-2858
 Project: Kafka
  Issue Type: Improvement
  Components: security
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor


We currently use `KafkaPrincipal` at the authentication and authorization 
layer. But there is an implicit assumption that we always use a 
`KafkaPrincipal` with principalType == USER_TYPE, as we ignore the 
principalType of the `KafkaPrincipal` when we create `RequestChannel.Session`.

I think it would be clearer if we used a separate `Principal` implementation in 
the authentication layer.





[jira] [Commented] (KAFKA-2857) ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when describing a non-existent group before the offset topic is created

2015-11-18 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011425#comment-15011425
 ] 

Ismael Juma commented on KAFKA-2857:


cc [~junrao] [~hachikuji]

> ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when 
> describing a non-existent group before the offset topic is created
> -
>
> Key: KAFKA-2857
> URL: https://issues.apache.org/jira/browse/KAFKA-2857
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ismael Juma
>Priority: Minor
>
> If we describe a non-existing group before the offset topic is created, like 
> the following:
> {code}
> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer 
> --describe --group 
> {code}
> We get the following error:
> {code}
> Error while executing consumer group command The group coordinator is not 
> available.
> org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The 
> group coordinator is not available.
> {code}
> The exception is thrown in the `adminClient.describeConsumerGroup` call. We 
> can't interpret this exception as meaning that the group doesn't exist 
> because it could also be thrown if all replicas for an offset topic partition 
> are down (as explained by Jun).
> Jun also suggested that we should distinguish the case where a coordinator is 
> not available from the case where a coordinator doesn't exist.





[jira] [Created] (KAFKA-2857) ConsumerGroupCommand throws GroupCoordinatorNotAvailableException when describing a non-existent group before the offset topic is created

2015-11-18 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2857:
--

 Summary: ConsumerGroupCommand throws 
GroupCoordinatorNotAvailableException when describing a non-existent group 
before the offset topic is created
 Key: KAFKA-2857
 URL: https://issues.apache.org/jira/browse/KAFKA-2857
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Ismael Juma
Priority: Minor


If we describe a non-existing group before the offset topic is created, like 
the following:

{code}
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer 
--describe --group 
{code}

We get the following error:

{code}
Error while executing consumer group command The group coordinator is not 
available.
org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException: The group 
coordinator is not available.
{code}

The exception is thrown in the `adminClient.describeConsumerGroup` call. We 
can't interpret this exception as meaning that the group doesn't exist because 
it could also be thrown if all replicas for an offset topic partition are down 
(as explained by Jun).

Jun also suggested that we should distinguish the case where a coordinator is 
not available from the case where a coordinator doesn't exist.
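Until the two cases can be told apart, a caller has little choice but to treat the exception as transient and retry with a bound. The helper below is a hypothetical, self-contained sketch of that pattern (the exception class here is a stand-in; it is not Kafka's AdminClient API):

```java
import java.util.concurrent.Callable;

final class CoordinatorRetry {
    // Stand-in for org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException.
    static final class CoordinatorNotAvailableException extends RuntimeException {}

    // Invokes `call`, retrying up to maxAttempts times when the coordinator
    // is reported unavailable, sleeping backoffMs between attempts.
    static <T> T withRetry(Callable<T> call, int maxAttempts, long backoffMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (CoordinatorNotAvailableException e) {
                if (attempt >= maxAttempts) {
                    throw e;  // give up: coordinator still unavailable
                }
                Thread.sleep(backoffMs);
            }
        }
    }
}
```

The drawback, which is exactly the point of this issue, is that a bounded retry cannot distinguish "group does not exist" from "coordinator temporarily down": both surface as the same exception after the retries are exhausted.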





[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-11-18 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011263#comment-15011263
 ] 

Grant Henke commented on KAFKA-2421:


[~rsivaram] Do you mind if I finish this up? I would like to add a few unit 
tests related to compression size and get this merged. It's related to 
KAFKA-2800, which I have been working on.

> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>





[jira] [Updated] (KAFKA-2771) Add SSL Rolling Upgrade Test to System Tests

2015-11-18 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2771:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> Add SSL Rolling Upgrade Test to System Tests
> 
>
> Key: KAFKA-2771
> URL: https://issues.apache.org/jira/browse/KAFKA-2771
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Ensure we can perform a rolling upgrade to enable SSL on a running cluster
> *Method*
> - Start with a 0.9.0 cluster with SSL disabled
> - Upgrade the client and inter-broker ports to SSL (this takes two rounds of 
> bounces: one to open the SSL port and one to close the PLAINTEXT port)
> - Ensure you can produce (acks = -1) and consume during the process.
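The two-round bounce above could be sketched as broker configuration changes like the following; the property names follow the 0.9.0 broker config, but ports and the inter-broker switch-over point are assumptions about this particular test, not a prescribed procedure:

```properties
# Round 1: open the SSL port alongside the existing PLAINTEXT port,
# then rolling-bounce every broker with this config.
listeners=PLAINTEXT://:9092,SSL://:9093
# While both ports are open, inter-broker traffic can be moved to SSL.
security.inter.broker.protocol=SSL

# Round 2: close the PLAINTEXT port and rolling-bounce again, e.g.:
# listeners=SSL://:9093
```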





[jira] [Updated] (KAFKA-2772) Stabilize replication hard bounce test

2015-11-18 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2772:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> Stabilize replication hard bounce test
> --
>
> Key: KAFKA-2772
> URL: https://issues.apache.org/jira/browse/KAFKA-2772
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Minor
>
> There have been several spurious failures of replication tests during runs of 
> kafka system tests (see for example 
> http://testing.confluent.io/kafka/2015-11-07--001/)
> {code:title=report.txt}
> Expected producer to still be producing.
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in test_replication_with_broker_failure
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 65, in run_produce_consume_validate
> self.stop_producer_and_consumer()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 55, in stop_producer_and_consumer
> err_msg="Expected producer to still be producing.")
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: Expected producer to still be producing.
> {code}





[jira] [Updated] (KAFKA-2827) Producer metadata stall

2015-11-18 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2827:
---
Affects Version/s: (was: 0.9.0.1)
   0.9.0.0

> Producer metadata stall
> ---
>
> Key: KAFKA-2827
> URL: https://issues.apache.org/jira/browse/KAFKA-2827
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>
> This is a tracking issue for a suspicious behavior that has been plaguing the 
> stability of certain system tests.
> Producer metadata requests and produce requests can be blocked for "quite a 
> while" (tens of seconds to minutes) after a hard bounce of a broker, despite 
> the fact that new leaders for the topicPartitions on the bounced broker have 
> already been re-elected.
> This shows up in particular during ducktape replication tests where the 
> leader of a topic is bounced with a hard kill. The initial symptom was the 
> console consumer timing out due to no messages being available for 
> consumption during this lengthy producer "stall" period.
> This behavior does not appear to be present in kafka 0.8.2: 15 of 15 runs of 
> the test in question passed with 0.8.2 substituted for trunk.
> [~benstopford]'s findings so far:
> Theory:
> The first node dies, Node1. A new controller is elected and it correctly 
> reassigns the partition leaders to be the other two nodes (Node2, Node3). 
> However, for some reason, Node1 remains in the ISR list for two of the three 
> partitions: 
> Broker 3 cached leader info (Leader:3,ISR:3,2, AllReplicas:1,3,2) for 
> partition test_topic,0
> Broker 3 cached leader info (Leader:3,ISR:3,2,1, AllReplicas:3,2,1) for 
> partition test_topic,2
> Broker 3 cached leader info (Leader:2,ISR:2,1,3, AllReplicas:2,1,3) for 
> partition test_topic,1
> (a little later broker 1 is dropped from the ISR for partition 2, partition 3 
> keeps all three replicas throughout)
> Meanwhile the producer is sending requests with acks=-1. These block on the 
> server because the ISR for partitions 1,2 still contain the dead Node1. As an 
> aside, metadata requests from the producer are blocked too during this period 
> (they are present if you look in the producer’s request queue).
> The consumer times out as it hasn’t consumed any data for its timeout period 
> (this is a timeout in the console_consumer). This is because no data is 
> produced during this period, as the producer is blocked. (The producer is 
> blocked usually for around 15 seconds but this varies from run to run).
> When Node1 is restarted everything clears as the fetchers from Node1 can 
> replicate data again. The Producer is updated with metadata and continues 
> producing. All is back to normal.
> The next node goes down and the whole circle loops round again.
> Unanswered questions: 
> I don’t know why we end up in a situation where the ISR is empty (isr=[]). 
> What I do know is that this always coincides with the partition being in a 
> REPLICA_NOT_AVAILABLE or LEADER_NOT_AVAILABLE error state 
> (partition_error_code=5/9).
> To Do:
> Dig into why the dead node is not removed from the ISR
> To get the test running for now, one option would be to wait for the ISR to 
> return to isr=1,2,3 after each node is restarted.
> Sample Timeline
> 18:45:59,606 - consumer gets first disconnection error (n1 is dead)
> 18:45:59,500 - producer gets first disconnection error (n1 is dead)
> 18:46:06,001 - ZK times out node1
> 18:46:06,321 - Broker 3 (Controller) Selects new leader and ISR
> {"leader":3,"leader_epoch":1,"isr":[3,2]}
> test_topic,0
> => so it moves the leader but it doesn’t change the ISR on the other two 
> 18:46:06,168 - Broker 2 is updated by Broker 3 with ISR (leaders exclude n1 
> BUT ISR still names all 3?)
> 18:46:09,582 - Consumer gives up at 10s timeout
> 18:46:14,866 - Controller is notified by ZK that node 1 is back.
> 18:46:15,845 - metadata update hits the producer (my guess is this is simply 
> because node 1 is now available so replication can continue and release held 
> up produce requests).
> 18:46:16,922 - producer disconnects from node 3 (node 3 dead)
> If we need a workaround for the tests, we can simply set 
> replica.lag.time.max.ms to a lower value, like 1000ms (so far untested).
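The untested workaround floated above would amount to a broker config override like the one below; 1000 ms is only the value mentioned in the notes, not a recommendation:

```properties
# Drop a dead replica from the ISR sooner, so acks=-1 produce requests are
# not held up as long after a hard broker kill (untested workaround).
replica.lag.time.max.ms=1000
```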





[GitHub] kafka pull request: MINOR: Documentation improvements

2015-11-18 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/550

MINOR: Documentation improvements

* Fix typo in api.html
* Mark security features as beta quality (similar to new consumer). Is 
there better wording?
* Improve wording and clarify things in a number of places
* Improve layout of  blocks (tested locally, which doesn't seem to use 
the same stylesheets as the deployed version)
* Use producer.config in console-producer.sh command
* Improve SASL documentation structure

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka documentation-improvements

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/550.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #550


commit e980ae105003d015078f62700fb21a7e7a70bbad
Author: Ismael Juma 
Date:   2015-11-17T18:02:06Z

Fix typo where `producer` is used instead of `consumer`

commit 0e53d8977e5672a4ef0de40782aed8c4bd02a9d0
Author: Ismael Juma 
Date:   2015-11-18T09:10:44Z

Improve security documentation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2561) Optionally support OpenSSL for SSL/TLS

2015-11-18 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15010649#comment-15010649
 ] 

Ismael Juma commented on KAFKA-2561:


Encryption speed improvements in JDK 9 (note the following doesn't include the 
`SSLEngine` overhead, which is probably still significant):

{quote}
JDK 9: up to a 62x performance gain over the JDK 8 GA implementation
• up to 5.45x over 8u60 implementation
• 8u60 performance improved due to:
• https://bugs.openjdk.java.net/browse/JDK-8069072
{quote}

https://blogs.oracle.com/mullan/entry/slides_for_javaone_2015_session

> Optionally support OpenSSL for SSL/TLS 
> ---
>
> Key: KAFKA-2561
> URL: https://issues.apache.org/jira/browse/KAFKA-2561
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>
> JDK's `SSLEngine` is unfortunately a bit slow (KAFKA-2431 covers this in more 
> detail). We should consider supporting OpenSSL for SSL/TLS. Initial 
> experiments on my laptop show that it performs a lot better:
> {code}
> start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, 
> nMsg.sec, config
> 2015-09-21 14:41:58:245, 2015-09-21 14:47:02:583, 28610.2295, 94.0081, 
> 3000, 98574.6111, Java 8u60/server auth JDK 
> SSLEngine/TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
> 2015-09-21 14:38:24:526, 2015-09-21 14:40:19:941, 28610.2295, 247.8900, 
> 3000, 259931.5514, Java 8u60/server auth 
> OpenSslEngine/TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
> 2015-09-21 14:49:03:062, 2015-09-21 14:50:27:764, 28610.2295, 337.7751, 
> 3000, 354182.9000, Java 8u60/plaintext
> {code}
> Extracting the throughput figures:
> * JDK SSLEngine: 94 MB/s
> * OpenSSL SSLEngine: 247 MB/s
> * Plaintext: 337 MB/s (code from trunk, so no zero-copy due to KAFKA-2517)
> In order to get these figures, I used Netty's `OpenSslEngine` by hacking 
> `SSLFactory` to use Netty's `SslContextBuilder` and made a few changes to 
> `SSLTransportLayer` in order to work around differences in behaviour between 
> `OpenSslEngine` and JDK's SSLEngine (filed 
> https://github.com/netty/netty/issues/4235 and 
> https://github.com/netty/netty/issues/4238 upstream).
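The quoted MB/s figures can be recomputed directly from the start/end timestamps and data volume in the test output above. A small sketch (the timestamp format `HH:MM:SS:ms` is taken from the results as posted):

```python
from datetime import datetime

def throughput_mb_s(start, end, mb):
    """Recompute MB/s from the performance-test start/end timestamps."""
    fmt = "%Y-%m-%d %H:%M:%S:%f"  # %f right-pads "245" to 245000 us
    secs = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
    return mb / secs

mb = 28610.2295  # data.consumed.in.MB from each run
jdk = throughput_mb_s("2015-09-21 14:41:58:245", "2015-09-21 14:47:02:583", mb)
ssl = throughput_mb_s("2015-09-21 14:38:24:526", "2015-09-21 14:40:19:941", mb)
plain = throughput_mb_s("2015-09-21 14:49:03:062", "2015-09-21 14:50:27:764", mb)
print(round(jdk, 1), round(ssl, 1), round(plain, 1))  # ~94.0, ~247.9, ~337.8
```

The recomputed values match the MB.sec column, confirming the roughly 2.6x gap between the JDK SSLEngine and OpenSSL runs.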





[jira] [Comment Edited] (KAFKA-2561) Optionally support OpenSSL for SSL/TLS

2015-11-18 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15010649#comment-15010649
 ] 

Ismael Juma edited comment on KAFKA-2561 at 11/18/15 9:50 AM:
--

Encryption speed improvements in JDK 9 (note the following doesn't include the 
`SSLEngine` overhead, which is probably still significant):

{quote}
• JDK 9: up to a 62x performance gain over the JDK 8 GA implementation
• up to 5.45x over 8u60 implementation
• 8u60 performance improved due to 
https://bugs.openjdk.java.net/browse/JDK-8069072
{quote}

https://blogs.oracle.com/mullan/entry/slides_for_javaone_2015_session


was (Author: ijuma):
Encryption speed improvements in JDK 9 (note the following doesn't include the 
`SSLEngine` overhead, which is probably still significant):

{quote}
JDK 9: up to a 62x performance gain over the JDK 8 GA implementation
• up to 5.45x over 8u60 implementation
• 8u60 performance improved due to:
• https://bugs.openjdk.java.net/browse/JDK-8069072
{quote}

https://blogs.oracle.com/mullan/entry/slides_for_javaone_2015_session






[GitHub] kafka pull request: Update to Gradle 2.9 and update generated `gra...

2015-11-18 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/549

Update to Gradle 2.9 and update generated `gradlew` file

More performance improvements:

"In many cases, Gradle 2.9 is much faster than Gradle 2.8 when performing 
incremental builds.

Very large builds (many thousands of source files) could see incremental 
build speeds up to 80% faster than 2.7 and up to 40% faster than 2.8.

Gradle now uses a more efficient mechanism to scan the filesystem, making 
up-to-date checks significantly faster. This improvement is only available when 
running Gradle with Java 7 or newer.

Other improvements have been made to speed up include and exclude pattern 
evaluation; these improvements apply to all supported Java versions.

Gradle now uses much less memory than previous releases when performing 
incremental builds. By de-duplicating Strings used as file paths in internal 
caches, and by reducing the overhead of listing classes under test for Java 
projects, some builds use 30-70% less memory than Gradle 2.8."

https://docs.gradle.org/current/release-notes
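For reference, bumping the wrapper version typically amounts to pointing the wrapper properties file at the new distribution (illustrative sketch; the actual change is whatever the PR's regenerated `gradlew` files contain):

```properties
# gradle/wrapper/gradle-wrapper.properties (illustrative)
distributionUrl=https\://services.gradle.org/distributions/gradle-2.9-bin.zip
```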

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka gradle-2.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/549.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #549


commit 096a99b699457f5f0cfc55fee02f7a34b727456e
Author: Ismael Juma 
Date:   2015-11-18T09:14:00Z

Update to Gradle 2.9 and update generated `gradlew` file



