Build failed in Jenkins: kafka-trunk-jdk7 #1057

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: catch a commit failure due to rebalance in StreamThread

--
[...truncated 1486 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponse

[jira] [Commented] (KAFKA-3263) Add Markdown support for ConfigDef

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158412#comment-15158412
 ] 

ASF GitHub Bot commented on KAFKA-3263:
---

GitHub user jcustenborder opened a pull request:

https://github.com/apache/kafka/pull/952

KAFKA-3263 - Support for markdown generation.

Added support to generate markdown from ConfigDef entries. Added test 
toMarkdown() to ConfigDefTest. Added toMarkdown() to ConfigDef.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcustenborder/kafka KAFKA-3263

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/952.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #952


commit f2f91fadfe5a4fb3647b0d521cae967129651967
Author: Jeremy Custenborder 
Date:   2016-02-23T06:51:54Z

KAFKA-3263 - Added support to generate markdown from ConfigDef entries. 
Added test toMarkdown() to ConfigDefTest. Added toMarkdown() to ConfigDef.




> Add Markdown support for ConfigDef
> --
>
> Key: KAFKA-3263
> URL: https://issues.apache.org/jira/browse/KAFKA-3263
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Jeremy Custenborder
>Priority: Minor
>
> The ability to output markdown for ConfigDef would be nice, given that a lot
> of people use README.md files in their repositories.





[GitHub] kafka pull request: KAFKA-3263 - Support for markdown generation.

2016-02-22 Thread jcustenborder
GitHub user jcustenborder opened a pull request:

https://github.com/apache/kafka/pull/952

KAFKA-3263 - Support for markdown generation.

Added support to generate markdown from ConfigDef entries. Added test 
toMarkdown() to ConfigDefTest. Added toMarkdown() to ConfigDef.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcustenborder/kafka KAFKA-3263

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/952.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #952


commit f2f91fadfe5a4fb3647b0d521cae967129651967
Author: Jeremy Custenborder 
Date:   2016-02-23T06:51:54Z

KAFKA-3263 - Added support to generate markdown from ConfigDef entries. 
Added test toMarkdown() to ConfigDefTest. Added toMarkdown() to ConfigDef.






[jira] [Created] (KAFKA-3263) Add Markdown support for ConfigDef

2016-02-22 Thread Jeremy Custenborder (JIRA)
Jeremy Custenborder created KAFKA-3263:
--

 Summary: Add Markdown support for ConfigDef
 Key: KAFKA-3263
 URL: https://issues.apache.org/jira/browse/KAFKA-3263
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.9.0.1
Reporter: Jeremy Custenborder
Priority: Minor


The ability to output markdown for ConfigDef would be nice, given that a lot
of people use README.md files in their repositories.
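
For a rough sense of what this could look like, here is a hypothetical sketch
of markdown generation over config entries; the ConfigEntry holder and the
toMarkdown() method below are illustrative stand-ins, not Kafka's actual
ConfigDef API.

{code}
import java.util.Arrays;
import java.util.List;

public class ConfigMarkdownSketch {
    // Illustrative stand-in for a ConfigDef entry (name, type, default,
    // importance, documentation).
    static class ConfigEntry {
        final String name, type, defaultValue, importance, doc;
        ConfigEntry(String name, String type, String defaultValue,
                    String importance, String doc) {
            this.name = name; this.type = type; this.defaultValue = defaultValue;
            this.importance = importance; this.doc = doc;
        }
    }

    // Emit one markdown table row per config entry, ready for a README.md.
    static String toMarkdown(List<ConfigEntry> entries) {
        StringBuilder b = new StringBuilder();
        b.append("| Name | Type | Default | Importance | Description |\n");
        b.append("|------|------|---------|------------|-------------|\n");
        for (ConfigEntry e : entries)
            b.append("| ").append(e.name).append(" | ").append(e.type)
             .append(" | ").append(e.defaultValue).append(" | ")
             .append(e.importance).append(" | ").append(e.doc).append(" |\n");
        return b.toString();
    }

    public static void main(String[] args) {
        System.out.print(toMarkdown(Arrays.asList(new ConfigEntry(
            "bootstrap.servers", "list", "", "high",
            "Initial broker list used to discover the cluster."))));
    }
}
{code}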





Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-22 Thread Manikumar Reddy
+1

It would be great if we could completely close the issue below.

https://issues.apache.org/jira/browse/KAFKA-1843


On Tue, Feb 23, 2016 at 3:25 AM, Joel Koshy  wrote:

> +1
>
> Thanks for bringing it up
>
> On Mon, Feb 22, 2016 at 9:36 AM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > The new Java producer was introduced in 0.8.2.0 (released in February
> > 2015). It has become the default implementation for various tools since
> > 0.9.0.0 (released in October 2015) and it is the only implementation with
> > support for the security features introduced in 0.9.0.0.
> >
> > Given this, I think there's a good argument for deprecating the old Scala
> > producers for the next release (which is likely to be 0.10.0.0). This would
> > give our users a stronger signal regarding our plans to focus on the new
> > Java producer going forward.
> >
> > Note that this proposal is only about deprecating the old Scala producers
> > as, in my opinion, it is too early to do the same for the old Scala
> > consumers.
> >
> > Thoughts?
> >
> > Ismael
> >
>


[GitHub] kafka pull request: MINOR: catch a commit failure due to rebalance

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/933




[GitHub] kafka pull request: HOTFIX: check offset limits in streamtask when...

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/947




new producer failed with org.apache.kafka.common.errors.TimeoutException

2016-02-22 Thread Kris K
Hi All,

I saw an issue today wherein the producers (new producers) started to fail
with org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata after 6 ms.

This issue happened when we took down one of the 6 brokers (running version
0.8.2.1) for planned maintenance (graceful shutdown).

This broker happens to be the last one in the list of 3 brokers that are
part of bootstrap.servers.

As per my understanding, the producers should have used the other two
brokers in the bootstrap.servers list for metadata calls. But this did not
happen.

Is there any producer property that could have caused this? Any way to
figure out which broker is being used by producers for metadata calls?

Thanks,
Kris
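
For reference, a minimal sketch of the producer settings in play here (broker
names are placeholders): the brokers in bootstrap.servers are only used to
fetch initial metadata, and in the 0.8.2-era producer the timeout behind this
error is, as far as I can tell, governed by metadata.fetch.timeout.ms.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // All three entries are only used to bootstrap cluster metadata;
        // afterwards the producer talks to whichever brokers lead the
        // partitions it writes to.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        // Governs the "Failed to update metadata after ... ms" error in
        // the 0.8.2-era producer (assumed default: 60s).
        props.put("metadata.fetch.timeout.ms", "60000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
{code}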


Build failed in Jenkins: kafka-trunk-jdk7 #1055

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] MINOR - remove unused imports in package kafka.utils

--
[...truncated 1474 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > test

Build failed in Jenkins: kafka-trunk-jdk8 #383

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[jjkoshy] MINOR - remove unused imports in package kafka.utils

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-us1 (Ubuntu ubuntu ubuntu-us) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 04585d99c6843a4253bae8ee958e360dd734d10e 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 04585d99c6843a4253bae8ee958e360dd734d10e
 > git rev-list d142f8294af67fea20d77dcc5272770af153c0d9 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4877196870950820450.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 30.546 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1600530437746644902.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.11/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJavawarning: [options] bootstrap class path 
not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DE

[GitHub] kafka pull request: MINOR - remove unused imports in package kafka...

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/935




Build failed in Jenkins: kafka-trunk-jdk8 #382

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3256; Add print.timestamp option to console consumer.

--
[...truncated 5633 lines...]
org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSendRecordsRetries 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSourceConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSinkConnector PASSED

org.apache.kafka.

Jenkins build is back to normal : kafka-trunk-jdk7 #1054

2016-02-22 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-3262) Make KafkaStreams debugging friendly

2016-02-22 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-3262:

Description: 
Currently, KafkaStreams polls records in the same thread as the data
processing. This makes debugging user code, as well as KafkaStreams itself,
difficult. When the thread is suspended by the debugger, the next heartbeat of
the consumer tied to the thread won't be sent until the thread is resumed.
This often results in missed heartbeats and causes a group rebalance, so it
may well be a completely different context when the thread hits the breakpoint
the next time.
We should consider using separate threads for polling and processing.
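
A minimal sketch of the suggested decoupling (illustrative only, not
KafkaStreams code), assuming heartbeats are driven from the poll loop as they
are in this consumer version:

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DecoupledPollSketch {
    public static void main(String[] args) {
        BlockingQueue<String> buffer = new LinkedBlockingQueue<>(1000);

        // Poll thread: stands in for the consumer.poll() loop. Because it
        // keeps running, heartbeats keep flowing even while processing is
        // suspended in a debugger.
        Thread poller = new Thread(() -> {
            try {
                while (true)
                    buffer.put("record"); // enqueue polled records
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Processing thread: a breakpoint here no longer starves heartbeats.
        Thread processor = new Thread(() -> {
            try {
                while (true) {
                    String record = buffer.take();
                    // user processing logic would run here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        poller.start();
        processor.start();
    }
}
{code}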

> Make KafkaStreams debugging friendly
> 
>
> Key: KAFKA-3262
> URL: https://issues.apache.org/jira/browse/KAFKA-3262
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>
> Currently, KafkaStreams polls records in the same thread as the data
> processing. This makes debugging user code, as well as KafkaStreams itself,
> difficult. When the thread is suspended by the debugger, the next heartbeat
> of the consumer tied to the thread won't be sent until the thread is
> resumed. This often results in missed heartbeats and causes a group
> rebalance, so it may well be a completely different context when the thread
> hits the breakpoint the next time.
> We should consider using separate threads for polling and processing.





[jira] [Created] (KAFKA-3262) Make KafkaStreams debugging friendly

2016-02-22 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-3262:
---

 Summary: Make KafkaStreams debugging friendly
 Key: KAFKA-3262
 URL: https://issues.apache.org/jira/browse/KAFKA-3262
 Project: Kafka
  Issue Type: Sub-task
  Components: kafka streams
Affects Versions: 0.9.1.0
Reporter: Yasuhiro Matsuda








[jira] [Commented] (KAFKA-3018) Kafka producer hangs on producer.close() call if the producer topic contains single quotes in the topic name

2016-02-22 Thread Chi Hoang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158019#comment-15158019
 ] 

Chi Hoang commented on KAFKA-3018:
--

I encountered this, or at least a similar, problem.  I unknowingly had
double-quotes in the topic name and couldn't get topic metadata from the
cluster, because the server tried to validate the topic and determined it was
not valid.  The message the server returned was InvalidTopicException, but the
producer logs only showed INVALID_TOPIC_EXCEPTION - "The request attempted to
perform an operation on an invalid topic."  I had a lot of trouble tracking
this down.

This could be improved by failing early: adding topic name validation in the
ProducerRecord instantiation.

I ran this against trunk and 0.9.0.0 to confirm that the behavior is the same 
in each.
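
A sketch of the early validation suggested above; the helper is hypothetical,
and the character set is an assumption mirroring the broker's legal topic
characters (letters, digits, '.', '_' and '-'):

{code}
import java.util.regex.Pattern;

public final class TopicNames {
    // Assumed legal topic characters: letters, digits, '.', '_' and '-'.
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");

    // Fail at ProducerRecord creation time instead of deep inside send().
    public static String validate(String topic) {
        if (topic == null || topic.isEmpty())
            throw new IllegalArgumentException("Topic name must be non-empty");
        if (!LEGAL.matcher(topic).matches())
            throw new IllegalArgumentException(
                "Illegal characters in topic name: " + topic);
        return topic;
    }
}
{code}

A record built as new ProducerRecord<>(TopicNames.validate(topic), key, value)
would then surface the quoted-name mistake immediately instead of hanging in
close().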

> Kafka producer hangs on producer.close() call if the producer topic contains 
> single quotes in the topic name
> 
>
> Key: KAFKA-3018
> URL: https://issues.apache.org/jira/browse/KAFKA-3018
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: kanav anand
>Assignee: Jun Rao
>
> While creating topics with quotes in the name throws an exception, trying to
> close a producer configured with a topic name containing quotes causes the
> producer to hang.
> It can be easily replicated and verified by setting topic.name for a
> producer to a string containing single quotes.





[jira] [Work started] (KAFKA-3201) Add system test for KIP-31 and KIP-32 - Upgrade Test

2016-02-22 Thread Anna Povzner (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3201 started by Anna Povzner.
---
> Add system test for KIP-31 and KIP-32 - Upgrade Test
> 
>
> Key: KAFKA-3201
> URL: https://issues.apache.org/jira/browse/KAFKA-3201
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> This system test should test the procedure to upgrade a Kafka broker from 
> 0.8.x and 0.9.0 to 0.10.0
> The procedure is documented in KIP-32:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message





[jira] [Commented] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-02-22 Thread chen zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157976#comment-15157976
 ] 

chen zhu commented on KAFKA-3261:
-

[~guozhang] Thanks for creating the patch. I will finish this one as a
follow-up patch to KAFKA-2757.

> Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint
> -
>
> Key: KAFKA-3261
> URL: https://issues.apache.org/jira/browse/KAFKA-3261
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: chen zhu
>
> These two classes are serving similar purposes and can be consolidated. Also 
> as [~sasakitoa] suggested we can remove their "uriParseExp" variables but use 
> (a possibly modified)
> {code}
> private static final Pattern HOST_PORT_PATTERN = 
> Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> {code}
> in org.apache.kafka.common.utils.Utils instead.





[jira] [Issue Comment Deleted] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-02-22 Thread chen zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chen zhu updated KAFKA-3261:

Comment: was deleted

(was: @guozhang Thanks for creating the patch. I will finish this one as a
follow-up patch to KAFKA-2757.)

> Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint
> -
>
> Key: KAFKA-3261
> URL: https://issues.apache.org/jira/browse/KAFKA-3261
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: chen zhu
>
> These two classes are serving similar purposes and can be consolidated. Also 
> as [~sasakitoa] suggested we can remove their "uriParseExp" variables but use 
> (a possibly modified)
> {code}
> private static final Pattern HOST_PORT_PATTERN = 
> Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> {code}
> in org.apache.kafka.common.utils.Utils instead.





[jira] [Commented] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-02-22 Thread chen zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157974#comment-15157974
 ] 

chen zhu commented on KAFKA-3261:
-

@guozhang Thanks for creating the patch. I will finish this one as a
follow-up patch to KAFKA-2757.

> Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint
> -
>
> Key: KAFKA-3261
> URL: https://issues.apache.org/jira/browse/KAFKA-3261
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: chen zhu
>
> These two classes are serving similar purposes and can be consolidated. Also 
> as [~sasakitoa] suggested we can remove their "uriParseExp" variables but use 
> (a possibly modified)
> {code}
> private static final Pattern HOST_PORT_PATTERN = 
> Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> {code}
> in org.apache.kafka.common.utils.Utils instead.





[jira] [Assigned] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-02-22 Thread chen zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chen zhu reassigned KAFKA-3261:
---

Assignee: chen zhu

> Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint
> -
>
> Key: KAFKA-3261
> URL: https://issues.apache.org/jira/browse/KAFKA-3261
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: chen zhu
>
> These two classes are serving similar purposes and can be consolidated. Also 
> as [~sasakitoa] suggested we can remove their "uriParseExp" variables but use 
> (a possibly modified)
> {code}
> private static final Pattern HOST_PORT_PATTERN = 
> Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> {code}
> in org.apache.kafka.common.utils.Utils instead.





[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157972#comment-15157972
 ] 

ASF GitHub Bot commented on KAFKA-3256:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/949


> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}





[GitHub] kafka pull request: KAFKA-3256: Add print.timestamp option to cons...

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/949




[jira] [Updated] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3256:
---
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 949
[https://github.com/apache/kafka/pull/949]

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}





[jira] [Commented] (KAFKA-3196) KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157965#comment-15157965
 ] 

ASF GitHub Bot commented on KAFKA-3196:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/951

KAFKA-3196: Added checksum and size to RecordMetadata and ConsumerRecord

This is the second (remaining) part of KIP-42. See 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3196

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/951.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #951


commit ce10691e621a74070243c16dc8c0aa5ada531c72
Author: Anna Povzner 
Date:   2016-02-22T23:49:31Z

KAFKA-3196: KIP-42 (part 2) Added checksum and record size to 
RecordMetadata and ConsumerRecord




> KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords
> --
>
> Key: KAFKA-3196
> URL: https://issues.apache.org/jira/browse/KAFKA-3196
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> This is the second (smaller) part of KIP-42, which includes: Add record size 
> and CRC to RecordMetadata and ConsumerRecord.
> See details in KIP-42 wiki: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
>  
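
A sketch of what consuming the new metadata might look like in a send
callback; the accessor names (checksum, serializedKeySize,
serializedValueSize) follow the KIP-42 wiki and should be treated as
assumptions until the PR is merged.

{code}
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SendMetadataLogger implements Callback {
    @Override
    public void onCompletion(RecordMetadata md, Exception e) {
        if (e == null) {
            // Accessor names per the KIP-42 wiki; assumptions until merged.
            System.out.printf("offset=%d crc=%d keyBytes=%d valueBytes=%d%n",
                    md.offset(), md.checksum(),
                    md.serializedKeySize(), md.serializedValueSize());
        }
    }
}
{code}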





[GitHub] kafka pull request: KAFKA-3196: Added checksum and size to RecordM...

2016-02-22 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/951

KAFKA-3196: Added checksum and size to RecordMetadata and ConsumerRecord

This is the second (remaining) part of KIP-42. See 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3196

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/951.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #951


commit ce10691e621a74070243c16dc8c0aa5ada531c72
Author: Anna Povzner 
Date:   2016-02-22T23:49:31Z

KAFKA-3196: KIP-42 (part 2) Added checksum and record size to 
RecordMetadata and ConsumerRecord






Build failed in Jenkins: kafka-trunk-jdk7 #1053

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Update streams RocksDb to 4.1.0

[wangguoz] HOTFIX: make sure to go through all shutdown steps

--
[...truncated 2394 lines...]

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:941)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:380)
at hudson.scm.SCM.poll(SCM.java:397)
at hudson.model.AbstractProject._poll(AbstractProject.java:1450)
at hudson.model.AbstractProject.poll(AbstractProject.java:1353)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:510)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:539)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: Could not install JDK_1_7U51_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:941)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:380)
at hudson.scm.SCM.poll(SCM.java:397)
at hudson.model.AbstractProject._poll(AbstractProject.java:1450)
at hudson.model.AbstractProject.poll(AbstractProject.java:1353)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:510)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:539)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
java.lang.AssertionError: Did not get valid assignment for partitions 
[topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, topic1-3, 
topic1-1, topic2-2] after we changed subscription
at org.junit.Assert.fail(Assert.java:88)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:746)
at 
kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:872)
at 
kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:891)
at 
kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextCons

Re: [DISCUSS] KIP-47 - Add timestamp-based log deletion policy

2016-02-22 Thread Bill Warshaw
Sounds good.  I'll hold off on sending out a VOTE thread until after the
KIP meeting tomorrow.

On Mon, Feb 22, 2016 at 12:56 PM, Becket Qin  wrote:

> Hi Jun,
>
> I think it makes sense to implement KIP-47 after KIP-33 so we can make it
> work for both LogAppendTime and CreateTime.
>
> And yes, I'm actively working on KIP-33. I had a voting thread on KIP-33
> before and I'll bump it up.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
> On Mon, Feb 22, 2016 at 9:11 AM, Jun Rao  wrote:
>
> > Becket,
> >
> > Since you submitted KIP-33, are you actively working on that? If so, it
> > would make sense to implement KIP-47 after KIP-33 so that it works for
> > both CreateTime and LogAppendTime.
> >
> > Thanks,
> >
> > Jun
> >
> >
> >
> >
> > On Fri, Feb 19, 2016 at 6:25 PM, Bill Warshaw  wrote:
> >
> > > Hi Jun,
> > >
> > > 1.  I thought more about Andrew's comment about LogAppendTime.  The
> > > time-based index you are referring to is associated with KIP-33,
> > > correct?  Currently my implementation is just checking the last message
> > > in a segment, so we're restricted to LogAppendTime.  When the work for
> > > KIP-33 is completed, it sounds like CreateTime would also be valid.  Do
> > > you happen to know if anyone is currently working on KIP-33?
> > >
> > > 2. I did update the wiki after reading your original comment, but
> > > reading over it again I realize I could word a couple things more
> > > clearly.  I will do that tonight.
> > >
> > > Bill
> > >
> > > On Fri, Feb 19, 2016 at 7:02 PM, Jun Rao  wrote:
> > >
> > > > Hi, Bill,
> > > >
> > > > I replied with the following comments earlier to the thread. Did you
> > > > see that?
> > > >
> > > > Thanks for the proposal. A couple of comments.
> > > >
> > > > 1. It seems that this new policy should work for CreateTime as well.
> > > > If a topic is configured with CreateTime, messages may not be added in
> > > > strict order in the log. However, to build a time-based index, we will
> > > > be maintaining the largest timestamp for all messages in a log
> > > > segment. We can delete a segment if its largest timestamp is less than
> > > > log.retention.min.timestamp. This guarantees that no messages newer
> > > > than log.retention.min.timestamp will be deleted, which is probably
> > > > what the user wants.
> > > >
> > > > 2. Right now, the user can specify "delete" as the retention policy
> > > > and a log segment will be deleted either when the size of a partition
> > > > exceeds a threshold or the timestamp of a segment is older than a
> > > > relative period of time (say 7 days) from now. What you are proposing
> > > > is not a new retention policy, but an additional check that will cause
> > > > a segment to be deleted when the timestamp of a segment is older than
> > > > an absolute timestamp? If so, could you update the wiki accordingly?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Fri, Feb 19, 2016 at 2:57 PM, Bill Warshaw  wrote:
> > > >
> > > > > Hello all,
> > > > >
> > > > > What is the next step with this proposal?  The work for KIP-32 that
> > > > > it was based off merged earlier today
> > > > > (https://github.com/apache/kafka/pull/764, thank you Becket).  I
> > > > > have an implementation with tests, and I've confirmed that it
> > > > > actually works in a live system.  Is there more discussion that
> > > > > needs to be had about this KIP, or should I start a VOTE thread?
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Feb 16, 2016 at 5:06 PM, Jun Rao  wrote:
> > > > >
> > > > > > Bill,
> > > > > >
> > > > > > Thanks for the proposal. A couple of comments.
> > > > > >
> > > > > > 1. It seems that this new policy should work for CreateTime as
> > > > > > well.  If a topic is configured with CreateTime, messages may not
> > > > > > be added in strict order in the log. However, to build a
> > > > > > time-based index, we will be maintaining the largest timestamp for
> > > > > > all messages in a log segment.  We can delete a segment if its
> > > > > > largest timestamp is less than log.retention.min.timestamp. This
> > > > > > guarantees that no messages newer than log.retention.min.timestamp
> > > > > > will be deleted, which is probably what the user wants.
> > > > > >
> > > > > > 2. Right now, the user can specify "delete" as the retention
> > > > > > policy and a log segment will be deleted either when the size of a
> > > > > > partition exceeds a threshold or the timestamp of a segment is
> > > > > > older than a relative period of time (say 7 days) from now. What
> > > > > > you are proposing is not a new retention policy, but an additional
> > > > > > check that will cause a segment to be deleted when the timestamp
> > > > > > of a segment is older than an absolute timestamp? If so,
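
For concreteness, a minimal sketch of the absolute-timestamp check discussed
in this thread (assumed names, not Kafka's actual log-cleanup code):

{code}
import java.util.List;

public class TimestampRetentionSketch {
    // True when every message in the segment is older than the absolute
    // cutoff, so nothing newer than log.retention.min.timestamp is removed.
    static boolean eligibleForDeletion(long segmentMaxTimestampMs,
                                       long retentionMinTimestampMs) {
        return segmentMaxTimestampMs < retentionMinTimestampMs;
    }

    // Segments ordered oldest-first; stop at the first one we must keep.
    static int countDeletable(List<Long> segmentMaxTimestamps, long cutoffMs) {
        int deletable = 0;
        for (long maxTs : segmentMaxTimestamps) {
            if (!eligibleForDeletion(maxTs, cutoffMs))
                break;
            deletable++;
        }
        return deletable;
    }
}
{code}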

[jira] [Commented] (KAFKA-3260) Increase the granularity of commit for SourceTask

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157935#comment-15157935
 ] 

ASF GitHub Bot commented on KAFKA-3260:
---

GitHub user jcustenborder opened a pull request:

https://github.com/apache/kafka/pull/950

KAFKA-3260 - Added SourceTask.commitRecord

Added commitRecord(SourceRecord record) to SourceTask. This method is 
called during the callback from producer.send() when the message has been sent 
successfully. Added commitTaskRecord(SourceRecord record) to WorkerSourceTask 
to handle calling commitRecord on the SourceTask. Updated tests for calls to 
commitRecord.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcustenborder/kafka KAFKA-3260

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/950.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #950


commit f4e1826f659af99e39189d45af214fd9f030b77b
Author: Jeremy Custenborder 
Date:   2016-02-22T23:22:02Z

KAFKA-3260 - Added commitRecord(SourceRecord record) to SourceTask. This
method is called during the callback from producer.send() when the message has
been sent successfully. Added commitTaskRecord(SourceRecord record) to
WorkerSourceTask to handle calling commitRecord on the SourceTask. Updated
tests for calls to commitRecord.




> Increase the granularity of commit for SourceTask
> -
>
> Key: KAFKA-3260
> URL: https://issues.apache.org/jira/browse/KAFKA-3260
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Affects Versions: 0.9.0.1
>Reporter: Jeremy Custenborder
>Assignee: Ewen Cheslack-Postava
>
> As of right now, when commit is called, the developer does not know which
> messages have been accepted since the last poll. I'm proposing that we
> extend the SourceTask class to allow records to be committed individually.
> {code}
> public void commitRecord(SourceRecord record) throws InterruptedException 
> {
> // This space intentionally left blank.
> }
> {code}
> This method could be overridden to receive a SourceRecord during the callback 
> of producer.send. This will give us messages that have been successfully 
> written to Kafka. The developer then has the capability to commit messages to 
> the source individually or in batch.   
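
A sketch of the wiring this would imply in WorkerSourceTask (illustrative
only; SourceTaskLike stands in for the real SourceTask class):

{code}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CommitRecordSketch {
    // Stand-in for SourceTask with the proposed per-record hook.
    interface SourceTaskLike {
        void commitRecord(Object sourceRecord) throws InterruptedException;
    }

    // The producer.send callback forwards each successfully written record
    // back to the task, giving per-record rather than per-poll commits.
    static void sendAndCommit(KafkaProducer<byte[], byte[]> producer,
                              ProducerRecord<byte[], byte[]> record,
                              SourceTaskLike task, Object sourceRecord) {
        producer.send(record, (metadata, exception) -> {
            if (exception == null) {
                try {
                    task.commitRecord(sourceRecord);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }
}
{code}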





[GitHub] kafka pull request: KAFKA-3260 - Added SourceTask.commitRecord

2016-02-22 Thread jcustenborder
GitHub user jcustenborder opened a pull request:

https://github.com/apache/kafka/pull/950

KAFKA-3260 - Added SourceTask.commitRecord

Added commitRecord(SourceRecord record) to SourceTask. This method is 
called during the callback from producer.send() when the message has been sent 
successfully. Added commitTaskRecord(SourceRecord record) to WorkerSourceTask 
to handle calling commitRecord on the SourceTask. Updated tests for calls to 
commitRecord.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jcustenborder/kafka KAFKA-3260

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/950.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #950


commit f4e1826f659af99e39189d45af214fd9f030b77b
Author: Jeremy Custenborder 
Date:   2016-02-22T23:22:02Z

KAFKA-3260 - Added commitRecord(SourceRecord record) to SourceTask. This 
method is called during the callback from producer.send() when the message has been 
sent successfully. Added commitTaskRecord(SourceRecord record) to WorkerSourceTask 
to handle calling commitRecord on the SourceTask. Updated tests for calls to 
commitRecord.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Kafka KIP meeting Feb 23 at 11:00am PST

2016-02-22 Thread Rajini Sivaram
Jun,

Could we also discuss *KIP-43: Kafka SASL enhancements* in the meeting
tomorrow?

Thank you.

On Mon, Feb 22, 2016 at 10:16 PM, Jun Rao  wrote:

> Hi, Everyone,
>
> We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
> attend but haven't received an invite, please let me know. The following is
> the agenda.
>
> Agenda:
>
> KIP-33 - Add a time based log index to Kafka
> KIP-47 - Add timestamp-based log deletion policy
>
> Thanks,
>
> Jun
>



-- 
Regards,

Rajini


[jira] [Commented] (KAFKA-2970) Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint class

2016-02-22 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157821#comment-15157821
 ] 

Ismael Juma commented on KAFKA-2970:


From a protocol perspective, it's probably OK to go that way.

One concern I have is about API compatibility guarantees. Maybe these classes 
should live under an internal package so that we can actually change them if we 
have to? `o.a.k.common` classes are public API.

> Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint 
> class
> ---
>
> Key: KAFKA-2970
> URL: https://issues.apache.org/jira/browse/KAFKA-2970
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: chen zhu
>
> Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint 
> class which contains the same information. These should be consolidated for 
> simplicity and interoperability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Kafka KIP meeting Feb 23 at 11:00am PST

2016-02-22 Thread Jun Rao
Hi, Everyone,

We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
attend but haven't received an invite, please let me know. The following is
the agenda.

Agenda:

KIP-33 - Add a time based log index to Kafka
KIP-47 - Add timestamp-based log deletion policy

Thanks,

Jun


[jira] [Updated] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-02-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3261:
-
Description: 
These two classes are serving similar purposes and can be consolidated. Also as 
[~sasakitoa] suggested we can remove their "uriParseExp" variables but use (a 
possibly modified)

{code}
private static final Pattern HOST_PORT_PATTERN = 
Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
{code}

in org.apache.kafka.common.utils.Utils instead.

  was:
These two classes are serving similar purposes and can be consolidated. Also we 
can remove their "uriParseExp" variable but use

{code}
private static final Pattern HOST_PORT_PATTERN = 
Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
{code}

in org.apache.kafka.common.utils.Utils instead.


> Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint
> -
>
> Key: KAFKA-3261
> URL: https://issues.apache.org/jira/browse/KAFKA-3261
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>
> These two classes are serving similar purposes and can be consolidated. Also 
> as [~sasakitoa] suggested we can remove their "uriParseExp" variables but use 
> (a possibly modified)
> {code}
> private static final Pattern HOST_PORT_PATTERN = 
> Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> {code}
> in org.apache.kafka.common.utils.Utils instead.
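> For example, the shared pattern handles both plain and bracketed IPv6 
> authorities (the demo class below is a sketch, not the final API):
> {code}
> import java.util.regex.Matcher;
> import java.util.regex.Pattern;
> 
> public class HostPortDemo {
>     private static final Pattern HOST_PORT_PATTERN =
>         Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
> 
>     public static void main(String[] args) {
>         for (String s : new String[] {"broker1.example.com:9092", "[2001:db8::1]:9092"}) {
>             Matcher m = HOST_PORT_PATTERN.matcher(s);
>             if (m.matches())
>                 System.out.println(m.group(1) + " -> port " + m.group(2)); // host, port
>         }
>     }
> }
> {code}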



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3261) Consolidate class kafka.cluster.BrokerEndPoint and kafka.cluster.EndPoint

2016-02-22 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3261:


 Summary: Consolidate class kafka.cluster.BrokerEndPoint and 
kafka.cluster.EndPoint
 Key: KAFKA-3261
 URL: https://issues.apache.org/jira/browse/KAFKA-3261
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang


These two classes are serving similar purposes and can be consolidated. Also we 
can remove their "uriParseExp" variable but use

{code}
private static final Pattern HOST_PORT_PATTERN = 
Pattern.compile(".*?\\[?([0-9a-zA-Z\\-.:]*)\\]?:([0-9]+)");
{code}

in org.apache.kafka.common.utils.Utils instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157780#comment-15157780
 ] 

ASF GitHub Bot commented on KAFKA-3256:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/949

KAFKA-3256: Add print.timestamp option to console consumer.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3256

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/949.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #949


commit f4a2ebd5feb75cde8b44b3cb1512152805259383
Author: Jiangjie Qin 
Date:   2016-02-21T06:03:27Z

KAFKA-3256: Add print.timestamp option to console consumer. It is disabled 
by default
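
For reference, a hedged usage sketch (the property name comes from the PR title; 
the rest of the command line is illustrative):

$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test \
  --property print.timestamp=true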




> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3256:

Status: Patch Available  (was: Open)

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3256: Add print.timestamp option to cons...

2016-02-22 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/949

KAFKA-3256: Add print.timestamp option to console consumer.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3256

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/949.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #949


commit f4a2ebd5feb75cde8b44b3cb1512152805259383
Author: Jiangjie Qin 
Date:   2016-02-21T06:03:27Z

KAFKA-3256: Add print.timestamp option to console consumer. It is disabled 
by default




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-3248) AdminClient Blocks Forever in send Method

2016-02-22 Thread Warren Green (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Warren Green reassigned KAFKA-3248:
---

Assignee: Warren Green

> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Assignee: Warren Green
>Priority: Minor
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}
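> A self-contained illustration of the bounded-wait pattern this fix calls for, 
> using java.util.concurrent stand-ins rather than Kafka internals:
> {code}
> import java.util.concurrent.Future;
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.TimeoutException;
> 
> public class BoundedWaitDemo {
>     // Equivalent in spirit to poll(RequestFuture, long timeout): wait at
>     // most timeoutMs instead of blocking indefinitely.
>     public static <T> T awaitWithTimeout(Future<T> future, long timeoutMs) throws Exception {
>         try {
>             return future.get(timeoutMs, TimeUnit.MILLISECONDS);
>         } catch (TimeoutException e) {
>             future.cancel(true); // give up on the request rather than hang
>             throw e;
>         }
>     }
> }
> {code}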



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #381

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Update streams RocksDb to 4.1.0

[wangguoz] HOTFIX: make sure to go through all shutdown steps

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-us1 (Ubuntu ubuntu ubuntu-us) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision ff7b0f5b467bdf553584fb253b00f460dfbe8943 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ff7b0f5b467bdf553584fb253b00f460dfbe8943
 > git rev-list e3ab96b2f0b429e0fe5991a185cd980b0d490e25 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4239058679806272168.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 20.519 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4723018064029964210.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.11/userguide/gradle_daemon.html.

FAILURE: Build failed with an exception.

* Where:
Build file ' 
line: 55

* What went wrong:
A problem occurred evaluating root project 'kafka-trunk-jdk8'.
> Could not open cp_dsl class cache for script 
> '
>  
> (/home/jenkins/.gradle/caches/2.11/scripts/dependencies_eogbowi2wv0ybu46z2ja62p2m/cp_dsl).
   > Timeout waiting to lock cp_dsl class cache for script 
'
 
(/home/jenkins/.gradle/caches/2.11/scripts/dependencies_eogbowi2wv0ybu46z2ja62p2m/cp_dsl).
 It is currently in use by another Gradle instance.
 Owner PID: unknown
 Our PID: 10198
 Owner Operation: unknown
 Our operation: Initialize cache
 Lock file: 
/home/jenkins/.gradle/caches/2.11/scripts/dependencies_eogbowi2wv0ybu46z2ja62p2m/cp_dsl/cache.properties.lock

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 1 mins 11.284 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-22 Thread Joel Koshy
+1

Thanks for bringing it up

On Mon, Feb 22, 2016 at 9:36 AM, Ismael Juma  wrote:

> Hi all,
>
> The new Java producer was introduced in 0.8.2.0 (released in February
> 2015). It has become the default implementation for various tools since
> 0.9.0.0 (released in October 2015) and it is the only implementation with
> support for the security features introduced in 0.9.0.0.
>
> Given this, I think there's a good argument for deprecating the old Scala
> producers for the next release (which is likely to be 0.10.0.0). This would
> give our users a stronger signal regarding our plans to focus on the new
> Java producer going forward.
>
> Note that this proposal is only about deprecating the old Scala producers
> as, in my opinion, it is too early to do the same for the old Scala
> consumers.
>
> Thoughts?
>
> Ismael
>


Build failed in Jenkins: kafka-trunk-jdk8 #380

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Example style improvements

--
[...truncated 5012 lines...]
org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testLockStateDirectory PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssginmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest

Build failed in Jenkins: kafka-trunk-jdk7 #1052

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Example style improvements

--
[...truncated 1479 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidJoinGroup PASSED

Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-22 Thread Neha Narkhede
+1

On Mon, Feb 22, 2016 at 1:25 PM, Guozhang Wang  wrote:

> +1. I feel it is the right time to do so in 0.10.0.0.
>
> On Tue, Feb 23, 2016 at 1:58 AM, Becket Qin  wrote:
>
> > +1 on deprecating old producer.
> >
> > On Mon, Feb 22, 2016 at 9:36 AM, Ismael Juma  wrote:
> >
> > > Hi all,
> > >
> > > The new Java producer was introduced in 0.8.2.0 (released in February
> > > 2015). It has become the default implementation for various tools since
> > > 0.9.0.0 (released in October 2015) and it is the only implementation
> with
> > > support for the security features introduced in 0.9.0.0.
> > >
> > > Given this, I think there's a good argument for deprecating the old
> Scala
> > > producers for the next release (which is likely to be 0.10.0.0). This
> > would
> > > give our users a stronger signal regarding our plans to focus on the
> new
> > > Java producer going forward.
> > >
> > > Note that this proposal is only about deprecating the old Scala
> producers
> > > as, in my opinion, it is too early to do the same for the old Scala
> > > consumers.
> > >
> > > Thoughts?
> > >
> > > Ismael
> > >
> >
>
>
>
> --
> -- Guozhang
>



-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-2970) Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint class

2016-02-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157728#comment-15157728
 ] 

Guozhang Wang commented on KAFKA-2970:
--

I merged in the patch for KAFKA-2757, which actually gets rid of these classes; 
I was not aware that keeping them was intentional.

Personally I feel we do not need two separate classes and can use the 
o.a.k.common.Endpoint class, given that we are confident (at least for now) that 
these fields will not change in the future, or that they will always change 
together if they ever do. But we should revisit this issue if that assumption is 
broken. [~ijuma] thoughts?

> Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint 
> class
> ---
>
> Key: KAFKA-2970
> URL: https://issues.apache.org/jira/browse/KAFKA-2970
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: chen zhu
>
> Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint 
> class which contains the same information. These should be consolidated for 
> simplicity and interoperability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2214) kafka-reassign-partitions.sh --verify should return non-zero exit codes when reassignment is not completed yet

2016-02-22 Thread Jack Lund (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157726#comment-15157726
 ] 

Jack Lund commented on KAFKA-2214:
--

Has any progress been made on this, or has it been rejected? We also need 
partition reassignment to return a non-zero exit code on failure.

> kafka-reassign-partitions.sh --verify should return non-zero exit codes when 
> reassignment is not completed yet
> --
>
> Key: KAFKA-2214
> URL: https://issues.apache.org/jira/browse/KAFKA-2214
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Michael Noll
>Assignee: Manikumar Reddy
>Priority: Minor
> Attachments: KAFKA-2214.patch, KAFKA-2214_2015-07-10_21:56:04.patch, 
> KAFKA-2214_2015-07-13_21:10:58.patch, KAFKA-2214_2015-07-14_15:31:12.patch, 
> KAFKA-2214_2015-07-14_15:40:49.patch, KAFKA-2214_2015-08-05_20:47:17.patch
>
>
> h4. Background
> The admin script {{kafka-reassign-partitions.sh}} should integrate better 
> with automation tools such as Ansible, which rely on scripts adhering to Unix 
> best practices such as appropriate exit codes on success/failure.
> h4. Current behavior (incorrect)
> When reassignments are still in progress {{kafka-reassign-partitions.sh}} 
> prints {{ERROR}} messages but returns an exit code of zero, which indicates 
> success.  This behavior makes it a bit cumbersome to integrate the script 
> into automation tools such as Ansible.
> {code}
> $ kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 
> --reassignment-json-file partitions-to-move.json --verify
> Status of partition reassignment:
> ERROR: Assigned replicas (316,324,311) don't match the list of replicas for 
> reassignment (316,324) for partition [mytopic,2]
> Reassignment of partition [mytopic,0] completed successfully
> Reassignment of partition [myothertopic,1] completed successfully
> Reassignment of partition [myothertopic,3] completed successfully
> ...
> $ echo $?
> 0
> # But preferably the exit code in the presence of ERRORs should be, say, 1.
> {code}
> h3. How to improve
> I'd suggest that, using the above as the running example, if there are any 
> {{ERROR}} entries in the output (i.e. if there are any assignments remaining 
> that don't match the desired assignments), then {{kafka-reassign-partitions.sh}} 
> should return a non-zero exit code.
> h3. Notes
> In Kafka 0.8.2 the output is a bit different: The ERROR messages are now 
> phrased differently.
> Before:
> {code}
> ERROR: Assigned replicas (316,324,311) don't match the list of replicas for 
> reassignment (316,324) for partition [mytopic,2]
> {code}
> Now:
> {code}
> Reassignment of partition [mytopic,2] is still in progress
> {code}
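> To make the desired contract concrete, a hypothetical session after the fix 
> (output reuses the 0.8.2 phrasing above; the non-zero exit code is the 
> proposed change, not current behavior):
> {code}
> $ kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 \
>     --reassignment-json-file partitions-to-move.json --verify
> Reassignment of partition [mytopic,2] is still in progress
> Reassignment of partition [mytopic,0] completed successfully
> $ echo $?
> 1
> {code}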



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Anna Povzner (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157723#comment-15157723
 ] 

Anna Povzner commented on KAFKA-3256:
-

[~becket_qin] I wrote my comment without seeing yours. Yes, I think the tear-down 
timeout failures are unrelated, and I don't think they actually cause any issues 
([~geoffra] ?).

I'll take on the upgrade and compatibility system tests if you don't mind.

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3245) need a way to specify the number of replicas for change log topics

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157720#comment-15157720
 ] 

ASF GitHub Bot commented on KAFKA-3245:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/948

KAFKA-3245: config for changelog replication factor

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka changelog_topic_replication

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/948.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #948


commit 9f3ec06214d6bdbad5833ffb3b68512ae9c58bbc
Author: Yasuhiro Matsuda 
Date:   2016-02-18T21:37:27Z

change log replication

commit ce5ebe42cdbc79a73fedccc5dbaf3d9c8d03597f
Author: Yasuhiro Matsuda 
Date:   2016-02-22T21:26:28Z

Merge branch 'trunk' of github.com:apache/kafka into 
changelog_topic_replication




> need a way to specify the number of replicas for change log topics
> --
>
> Key: KAFKA-3245
> URL: https://issues.apache.org/jira/browse/KAFKA-3245
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>
> Currently the number of replicas of auto-created changelog topics is one. 
> This makes stream processing not fault-tolerant. A way to specify the number 
> of replicas in config is desired.
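> A hypothetical sketch of what such a config could look like (the property name 
> is an assumption based on this ticket, not a confirmed key):
> {code}
> import java.util.Properties;
> 
> public class StreamsReplicationDemo {
>     public static Properties streamsProps() {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092");
>         // Assumed key: replication factor for auto-created changelog topics.
>         props.put("replication.factor", "3");
>         return props;
>     }
> }
> {code}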



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3245: config for changelog replication f...

2016-02-22 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/948

KAFKA-3245: config for changelog replication factor

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka changelog_topic_replication

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/948.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #948


commit 9f3ec06214d6bdbad5833ffb3b68512ae9c58bbc
Author: Yasuhiro Matsuda 
Date:   2016-02-18T21:37:27Z

change log replication

commit ce5ebe42cdbc79a73fedccc5dbaf3d9c8d03597f
Author: Yasuhiro Matsuda 
Date:   2016-02-22T21:26:28Z

Merge branch 'trunk' of github.com:apache/kafka into 
changelog_topic_replication




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Anna Povzner (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157719#comment-15157719
 ] 

Anna Povzner commented on KAFKA-3256:
-

FYI: The upgrade test fails with this error:
java.lang.IllegalArgumentException: requirement failed: message.format.version 
0.10.0-IV0 cannot be used when inter.broker.protocol.version is set to 0.8.2

I think this is expected, right? We need to use the 0.9.0 (or 0.8) message format 
in the first pass of the 0.8-to-0.10 upgrade test (which is what the current 
upgrade test exercises), is that correct?
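
For context, a hedged sketch of first-pass broker overrides for such a rolling 
upgrade (the message-format key is assumed to be the broker-level 
log.message.format.version; exact values depend on the starting version):
{code}
# First-pass rolling-upgrade overrides (illustrative): keep both the
# inter-broker protocol and the on-disk message format at the old version
# until all brokers are upgraded.
inter.broker.protocol.version=0.8.2
log.message.format.version=0.8.2
{code}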

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-22 Thread Guozhang Wang
+1. I feel it is the right time to do so in 0.10.0.0.

On Tue, Feb 23, 2016 at 1:58 AM, Becket Qin  wrote:

> +1 on deprecating old producer.
>
> On Mon, Feb 22, 2016 at 9:36 AM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > The new Java producer was introduced in 0.8.2.0 (released in February
> > 2015). It has become the default implementation for various tools since
> > 0.9.0.0 (released in October 2015) and it is the only implementation with
> > support for the security features introduced in 0.9.0.0.
> >
> > Given this, I think there's a good argument for deprecating the old Scala
> > producers for the next release (which is likely to be 0.10.0.0). This
> would
> > give our users a stronger signal regarding our plans to focus on the new
> > Java producer going forward.
> >
> > Note that this proposal is only about deprecating the old Scala producers
> > as, in my opinion, it is too early to do the same for the old Scala
> > consumers.
> >
> > Thoughts?
> >
> > Ismael
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request: HOTFIX: check offset limits in streamtask when...

2016-02-22 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/947

HOTFIX: check offset limits in streamtask when recovering KTable store

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka hotfix2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/947.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #947


commit 2329249b89149ec048ca1d172292db85baa93ab6
Author: Yasuhiro Matsuda 
Date:   2016-02-22T21:24:45Z

HOTFIX: check offset limits in streamtask when recovering KTable store




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157715#comment-15157715
 ] 

Jiangjie Qin edited comment on KAFKA-3256 at 2/22/16 9:23 PM:
--

Thanks for the investigation [~apovzner]. It looks like the upgrade test failed 
because the default message.format.version is higher than 
inter.broker.protocol.version, and the producer compatibility test failed because 
the producer was not able to parse the timestamp field in the produce response.

It is not clear to me why we see tear-down timeout failures in other tests. Those 
failures did not affect the test results, but from the logs it seems all the 
servers shut down successfully.

I agree that we can check in this patch first and fix the upgrade test and 
producer compatibility test in two separate patches.


was (Author: becket_qin):
Thanks for the investigation [~apovzner]. It looks upgrade test tests failed 
because the default message.format.version is higher than inter.broker.version. 
And producer compatibility test failed because the producer was not able to 
parse timestamp field in the produce response.

It is not clear to me that why we see tear down timeout failures in other 
tests. Those failures did not affect the test results but from the logs it 
seems all the servers have successfully shutdown.

I agree that we can check in this patch first and fix upgrade test and producer 
compatibility test in the other two separate patches.

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157715#comment-15157715
 ] 

Jiangjie Qin commented on KAFKA-3256:
-

Thanks for the investigation [~apovzner]. It looks like the upgrade tests failed 
because the default message.format.version is higher than 
inter.broker.protocol.version, and the producer compatibility test failed because 
the producer was not able to parse the timestamp field in the produce response.

It is not clear to me why we see tear-down timeout failures in other tests. Those 
failures did not affect the test results, but from the logs it seems all the 
servers shut down successfully.

I agree that we can check in this patch first and fix the upgrade test and 
producer compatibility test in two separate patches.

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timestamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: make sure to go through all shutdown s...

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/928


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Update streams RocksDb to 4.1.0

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/937


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1051

2016-02-22 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3255: Added unit tests for NetworkClient.connectionDelay(Node

--
[...truncated 2642 lines...]

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > 

[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Anna Povzner (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157604#comment-15157604
 ] 

Anna Povzner commented on KAFKA-3256:
-

[~becket_qin], [~ijuma], [~geoffra] The remaining failing system tests are the 
compatibility and rolling-upgrade tests. The issue is that both tests assume 
trunk to be 0.9. Since we are testing 0.8-to-0.9 upgrades (and similarly 
compatibility) in the 0.9 branch, we don't need to port those tests to cover 0.9 
versus trunk. We have separate JIRAs (KAFKA-3201 and KAFKA-3188) to add 
0.8-to-0.10 and 0.9-to-0.10 upgrade tests, and to test compatibility of a mix of 
0.9 and 0.10 clients with 0.10 brokers. My proposal is to have a patch with the 
current fixes, and to address the compatibility and upgrade test failures as part 
of KAFKA-3201 and KAFKA-3188, which are currently assigned to me.

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timetamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3256) Large number of system test failures

2016-02-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157580#comment-15157580
 ] 

Jiangjie Qin commented on KAFKA-3256:
-

[~geoffra] It seems that yesterday's build had two test failures. [~ijuma] 
kicked off a new build with better logging. It shows many broker teardown 
failures. I'll take a look at the logs to see what happened.

> Large number of system test failures
> 
>
> Key: KAFKA-3256
> URL: https://issues.apache.org/jira/browse/KAFKA-3256
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Jiangjie Qin
>
> Confluent's nightly run of the kafka system tests reported a large number of 
> failures beginning 2/20/2016
> Test run: 2016-02-19--001.1455897182--apache--trunk--eee9522/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-19--001.1455897182--apache--trunk--eee9522/report.html
> Pass: 136
> Fail: 0
> Test run: 2016-02-20--001.1455979842--apache--trunk--5caa800/
> Link: 
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-02-20--001.1455979842--apache--trunk--5caa800/report.html
> Pass: 72
> Fail: 64
> I.e. trunk@eee9522 was the last passing run, and trunk@5caa800 had a large 
> number of failures.
> Given its complexity, the most likely culprit is 45c8195fa, and I confirmed 
> this is the first commit with failures on a small number of tests.
> [~becket_qin] do you mind investigating?
> {code}
> commit 5caa800e217c6b83f62ee3e6b5f02f56e331b309
> Author: Jun Rao 
> Date:   Fri Feb 19 09:40:59 2016 -0800
> trivial fix to authorization CLI table
> commit 45c8195fa14c766b200c720f316836dbb84e9d8b
> Author: Jiangjie Qin 
> Date:   Fri Feb 19 07:56:40 2016 -0800
> KAFKA-3025; Added timetamp to Message and use relative offset.
> commit eee95228fabe1643baa016a2d49fb0a9fe2c66bd
> Author: Yasuhiro Matsuda 
> Date:   Thu Feb 18 09:39:30 2016 +0800
> MINOR: remove streams config params from producer/consumer configs
> {code}





[GitHub] kafka pull request: MINOR: Example style improvements

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/940


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3093: Add Connect status tracking API

2016-02-22 Thread hachikuji
GitHub user hachikuji reopened a pull request:

https://github.com/apache/kafka/pull/920

KAFKA-3093: Add Connect status tracking API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3093

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/920.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #920


commit 9cf01b36020816d034af539b6e91109a073c3c4c
Author: Jason Gustafson 
Date:   2016-02-10T23:25:25Z

KAFKA-3093 [WIP]: Add status tracking API

commit ccf93d8ca32d63ded27866d50a8dd5f5576e8d01
Author: Jason Gustafson 
Date:   2016-02-16T21:06:27Z

additional cleanup and testing

commit 3e8a938fb907dc7683d827db23dce1cd8f44c310
Author: Jason Gustafson 
Date:   2016-02-16T21:08:16Z

remove unneeded test

commit 3ba67c37c42063f14957eccf38b90d3fc8d167c1
Author: Jason Gustafson 
Date:   2016-02-16T22:10:00Z

improve docs and cancel method to WorkerTask

commit 05d8dc81e1d61eaef590fb81cad26106f4e8a85e
Author: Jason Gustafson 
Date:   2016-02-17T03:36:36Z

testing/fixes

commit f7a81fe5e96f2a1ba420c060595152738eb6054f
Author: Jason Gustafson 
Date:   2016-02-17T18:05:54Z

add more testing

commit 8e9047422ae63c8aece23343f7920055ff10a056
Author: Jason Gustafson 
Date:   2016-02-17T18:49:47Z

fix checkstyle error

commit e3cdc47a070271dbe49424943852680174a48e91
Author: Jason Gustafson 
Date:   2016-02-19T23:51:22Z

make Herder get connector/task status API synchronous

commit 781a4f9378dd24c1ab963ed3c9179e4a261aa255
Author: Jason Gustafson 
Date:   2016-02-19T23:53:21Z

remove unused lifecycle listener in WorkerTask

commit d9849c7f65f2b7a4b2d45c9cb3454d2e5db0776d
Author: Jason Gustafson 
Date:   2016-02-20T00:07:59Z

batch stopping/awaiting tasks in herders

commit 62dda0f2d5a6a049fc8b33de505e4e20af58560c
Author: Jason Gustafson 
Date:   2016-02-20T00:10:30Z

workerId should be worker_id in status response

commit 624e355cb28d76a7cb0efc272f6d3b236329cd73
Author: Jason Gustafson 
Date:   2016-02-22T18:43:20Z

add retry and max in-flight requests config for KafkaBasedLog






[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157534#comment-15157534
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/920


> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054;
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by connectors and tasks;
> Users should be able to fetch this information via the REST API and send 
> necessary commands (reconfiguring, restarting, pausing, unpausing) to 
> connectors and tasks via the REST API;
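
For illustration, a status lookup of the kind described above might look like 
the following; the endpoint path and response shape are assumptions based on 
this description and the commit notes (e.g. the worker_id field), not a 
confirmed interface.

{code}
# Hypothetical sketch: fetch the status of a connector and its tasks over the
# Connect REST API (endpoint and fields assumed from the description above).
$ curl -s http://localhost:8083/connectors/my-connector/status
{
  "name": "my-connector",
  "connector": { "state": "RUNNING", "worker_id": "10.0.0.1:8083" },
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "10.0.0.1:8083" } ]
}
{code}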





[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157535#comment-15157535
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

GitHub user hachikuji reopened a pull request:

https://github.com/apache/kafka/pull/920

KAFKA-3093: Add Connect status tracking API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3093

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/920.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #920


commit 9cf01b36020816d034af539b6e91109a073c3c4c
Author: Jason Gustafson 
Date:   2016-02-10T23:25:25Z

KAFKA-3093 [WIP]: Add status tracking API

commit ccf93d8ca32d63ded27866d50a8dd5f5576e8d01
Author: Jason Gustafson 
Date:   2016-02-16T21:06:27Z

additional cleanup and testing

commit 3e8a938fb907dc7683d827db23dce1cd8f44c310
Author: Jason Gustafson 
Date:   2016-02-16T21:08:16Z

remove unneeded test

commit 3ba67c37c42063f14957eccf38b90d3fc8d167c1
Author: Jason Gustafson 
Date:   2016-02-16T22:10:00Z

improve docs and cancel method to WorkerTask

commit 05d8dc81e1d61eaef590fb81cad26106f4e8a85e
Author: Jason Gustafson 
Date:   2016-02-17T03:36:36Z

testing/fixes

commit f7a81fe5e96f2a1ba420c060595152738eb6054f
Author: Jason Gustafson 
Date:   2016-02-17T18:05:54Z

add more testing

commit 8e9047422ae63c8aece23343f7920055ff10a056
Author: Jason Gustafson 
Date:   2016-02-17T18:49:47Z

fix checkstyle error

commit e3cdc47a070271dbe49424943852680174a48e91
Author: Jason Gustafson 
Date:   2016-02-19T23:51:22Z

make Herder get connector/task status API synchronous

commit 781a4f9378dd24c1ab963ed3c9179e4a261aa255
Author: Jason Gustafson 
Date:   2016-02-19T23:53:21Z

remove unused lifecycle listener in WorkerTask

commit d9849c7f65f2b7a4b2d45c9cb3454d2e5db0776d
Author: Jason Gustafson 
Date:   2016-02-20T00:07:59Z

batch stopping/awaiting tasks in herders

commit 62dda0f2d5a6a049fc8b33de505e4e20af58560c
Author: Jason Gustafson 
Date:   2016-02-20T00:10:30Z

workerId should be worker_id in status response

commit 624e355cb28d76a7cb0efc272f6d3b236329cd73
Author: Jason Gustafson 
Date:   2016-02-22T18:43:20Z

add retry and max in-flight requests config for KafkaBasedLog




> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054;
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by connectors and tasks;
> Users should be able to fetch this information via the REST API and send 
> necessary commands (reconfiguring, restarting, pausing, unpausing) to 
> connectors and tasks via the REST API;





[GitHub] kafka pull request: KAFKA-3093: Add Connect status tracking API

2016-02-22 Thread hachikuji
Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/920




[jira] [Commented] (KAFKA-3248) AdminClient Blocks Forever in send Method

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157515#comment-15157515
 ] 

ASF GitHub Bot commented on KAFKA-3248:
---

GitHub user WarrenGreen opened a pull request:

https://github.com/apache/kafka/pull/946

KAFKA-3248: AdminClient Blocks Forever in send Method

 Block while in bounds of timeout

 Author: Warren Green 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/WarrenGreen/kafka KAFKA-3248

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/946.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #946


commit aa5e556413e32b4d8205cf9f5c5379321e659526
Author: Warren Green 
Date:   2016-02-22T19:09:51Z

KAFKA-3248: AdminClient Blocks Forever in send Method

 Block while in bounds of timeout

 Author: Warren Green 




> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Priority: Minor
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}
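
To make the suggested fix concrete, here is a minimal, self-contained sketch 
of the bounded-wait pattern, using plain java.util.concurrent types as 
stand-ins for Kafka's internal RequestFuture; it is not the actual AdminClient 
code.

{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedSendSketch {
    // Wait for a pending request, but only within the bounds of the timeout,
    // instead of blocking indefinitely the way poll(RequestFuture) does.
    public static <T> T awaitWithTimeout(Future<T> future, long timeoutMs)
            throws InterruptedException, ExecutionException, TimeoutException {
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // give up on the request rather than hang forever
            throw e;
        }
    }
}
{code}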





[GitHub] kafka pull request: KAFKA-3248: AdminClient Blocks Forever in send...

2016-02-22 Thread WarrenGreen
GitHub user WarrenGreen opened a pull request:

https://github.com/apache/kafka/pull/946

KAFKA-3248: AdminClient Blocks Forever in send Method

 Block while in bounds of timeout

 Author: Warren Green 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/WarrenGreen/kafka KAFKA-3248

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/946.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #946


commit aa5e556413e32b4d8205cf9f5c5379321e659526
Author: Warren Green 
Date:   2016-02-22T19:09:51Z

KAFKA-3248: AdminClient Blocks Forever in send Method

 Block while in bounds of timeout

 Author: Warren Green 






[jira] [Updated] (KAFKA-3255) Extra unit tests for NetworkClient.connectionDelay(Node node, long now)

2016-02-22 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3255:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 941
[https://github.com/apache/kafka/pull/941]

> Extra unit tests for NetworkClient.connectionDelay(Node node, long now)
> ---
>
> Key: KAFKA-3255
> URL: https://issues.apache.org/jira/browse/KAFKA-3255
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Frank Scholten
>Priority: Trivial
>  Labels: test
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-3255.patch
>
>
> I am exploring the Kafka codebase and noticed that this method was not 
> covered so I added some tests. Also saw that the method isConnecting is not 
> used anywhere in the code. 





[jira] [Commented] (KAFKA-3255) Extra unit tests for NetworkClient.connectionDelay(Node node, long now)

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157408#comment-15157408
 ] 

ASF GitHub Bot commented on KAFKA-3255:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/941


> Extra unit tests for NetworkClient.connectionDelay(Node node, long now)
> ---
>
> Key: KAFKA-3255
> URL: https://issues.apache.org/jira/browse/KAFKA-3255
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Frank Scholten
>Priority: Trivial
>  Labels: test
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-3255.patch
>
>
> I am exploring the Kafka codebase and noticed that this method was not 
> covered so I added some tests. Also saw that the method isConnecting is not 
> used anywhere in the code. 





[GitHub] kafka pull request: KAFKA-3255 Added unit tests for NetworkClient....

2016-02-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/941




[jira] [Created] (KAFKA-3260) Increase the granularity of commit for SourceTask

2016-02-22 Thread Jeremy Custenborder (JIRA)
Jeremy Custenborder created KAFKA-3260:
--

 Summary: Increase the granularity of commit for SourceTask
 Key: KAFKA-3260
 URL: https://issues.apache.org/jira/browse/KAFKA-3260
 Project: Kafka
  Issue Type: Improvement
  Components: copycat
Affects Versions: 0.9.0.1
Reporter: Jeremy Custenborder
Assignee: Ewen Cheslack-Postava


As of right now, when commit is called, the developer does not know which 
messages have been accepted since the last poll. I'm proposing that we extend 
the SourceTask class to allow records to be committed individually.

{code}
public void commitRecord(SourceRecord record) throws InterruptedException {
    // This space intentionally left blank.
}
{code}

This method could be overridden to receive a SourceRecord during the callback 
of producer.send. This gives us the messages that have been successfully 
written to Kafka. The developer then has the ability to commit messages to 
the source individually or in batches.
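
As a concrete illustration, a connector might implement the proposed method as 
below. The sketch assumes the proposed commitRecord hook gets added to 
SourceTask, and the ackToSource helper is a hypothetical stand-in for whatever 
acknowledgement mechanism the external system provides.

{code}
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class MySourceTask extends SourceTask {

    @Override public String version() { return "0.1"; }
    @Override public void start(Map<String, String> props) { }
    @Override public List<SourceRecord> poll() throws InterruptedException { return null; }
    @Override public void stop() { }

    // Proposed hook: invoked once the producer callback confirms the record
    // was successfully written to Kafka (this method does not exist yet; it
    // is what this JIRA proposes to add to SourceTask).
    public void commitRecord(SourceRecord record) throws InterruptedException {
        // Ack just this record back to the source system, instead of
        // committing everything seen since the last poll().
        ackToSource(record.sourceOffset());
    }

    private void ackToSource(Map<String, ?> sourceOffset) {
        // Hypothetical: acknowledge the given offset in the external system.
    }
}
{code}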






Re: When to use advertised.host.name ?

2016-02-22 Thread Alex Loddengaard
Hi Eugene,

First, please post questions like this to the users list in the future. The
dev list is much busier and it's easier for your email to get lost. And,
this list is meant for development on Kafka internals.

As for your question, EC2 is a great example of when you may want to
configure an advertised hostname. This is partly because EC2 instances often
don't have reasonable hostname defaults, and partly because the hostname your
clients (producers/consumers) connect to will often be different from the one
the broker binds to. In my own tests in EC2, the broker will bind to
localhost, but clients connect to either an internal IP or a CNAME DNS record.

I should also explain how client configuration works. ZooKeeper stores the
hostnames of all brokers (either the hostname, or the advertised hostname if
it's configured). In 0.9 and the 0.8 producer, clients are configured with a
bootstrapped list of Kafka brokers. The 0.8 consumer is configured with
ZooKeeper. In both cases, the client makes requests (either to a broker or to
ZooKeeper) to fetch all broker hostnames and begin interacting with the
cluster. As such, in EC2, the advertised hostname is required unless you set
up the hostname such that clients can connect to it and the broker can bind
to it.
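
For example, a broker configuration along these lines matches the EC2 setup
described above; the values are illustrative, not a recommendation:

{code}
# server.properties (illustrative values for an EC2 broker)

# What the broker binds to locally:
host.name=localhost
port=9092

# What gets registered in ZooKeeper and handed to clients:
advertised.host.name=broker1.internal.example.com
advertised.port=9092
{code}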

Does that make sense?

Alex

On Thu, Feb 18, 2016 at 1:31 PM, eugene miretsky 
wrote:

> The FAQ says:
>
> "When a broker starts up, it registers its ip/port in ZK. You need to make
> sure the registered ip is consistent with what's listed in
> metadata.broker.list in the producer config. By default, the registered ip
> is given by InetAddress.getLocalHost.getHostAddress. Typically, this should
> return the real ip of the host. However, sometimes (e.g., in EC2), the
> returned ip is an internal one and can't be connected to from outside. The
> solution is to explicitly set the host ip to be registered in ZK by setting
> the "hostname" property in server.properties. In another rare case where
> the binding host/port is different from the host/port for client
> connection, you can set advertised.host.name and advertised.port for
> client
> connection."
>
> Can somebody give an example for that "rare case" where the binding
> host/port is different from the host/port for client connection?
>
> Cheers,
> Eugene
>



-- 
*Alex Loddengaard | **Solutions Architect | Confluent*
*Download Apache Kafka and Confluent Platform: www.confluent.io/download
*


Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-22 Thread Becket Qin
+1 on deprecating old producer.

On Mon, Feb 22, 2016 at 9:36 AM, Ismael Juma  wrote:

> Hi all,
>
> The new Java producer was introduced in 0.8.2.0 (released in February
> 2015). It has become the default implementation for various tools since
> 0.9.0.0 (released in October 2015) and it is the only implementation with
> support for the security features introduced in 0.9.0.0.
>
> Given this, I think there's a good argument for deprecating the old Scala
> producers for the next release (which is likely to be 0.10.0.0). This would
> give our users a stronger signal regarding our plans to focus on the new
> Java producer going forward.
>
> Note that this proposal is only about deprecating the old Scala producers
> as, in my opinion, it is too early to do the same for the old Scala
> consumers.
>
> Thoughts?
>
> Ismael
>


Re: [DISCUSS] KIP-47 - Add timestamp-based log deletion policy

2016-02-22 Thread Becket Qin
Hi Jun,

I think it makes sense to implement KIP-47 after KIP-33 so we can make it
work for both LogAppendTime and CreateTime.

And yes, I'm actively working on KIP-33. I had a voting thread on KIP-33
before and I'll bump it up.

Thanks,

Jiangjie (Becket) Qin



On Mon, Feb 22, 2016 at 9:11 AM, Jun Rao  wrote:

> Becket,
>
> Since you submitted KIP-33, are you actively working on that? If so, it
> would make sense to implement KIP-47 after KIP-33 so that it works for both
> CreateTime and LogAppendTime.
>
> Thanks,
>
> Jun
>
>
>
>
> On Fri, Feb 19, 2016 at 6:25 PM, Bill Warshaw  wrote:
>
> > Hi Jun,
> >
> > 1.  I thought more about Andrew's comment about LogAppendTime.  The
> > time-based index you are referring to is associated with KIP-33, correct?
> > Currently my implementation is just checking the last message in a
> segment,
> > so we're restricted to LogAppendTime.  When the work for KIP-33 is
> > completed, it sounds like CreateTime would also be valid.  Do you happen
> to
> > know if anyone is currently working on KIP-33?
> >
> > 2. I did update the wiki after reading your original comment, but reading
> > over it again I realize I could word a couple things more clearly.  I
> will
> > do that tonight.
> >
> > Bill
> >
> > On Fri, Feb 19, 2016 at 7:02 PM, Jun Rao  wrote:
> >
> > > Hi, Bill,
> > >
> > > I replied with the following comments earlier to the thread. Did you
> see
> > > that?
> > >
> > > Thanks for the proposal. A couple of comments.
> > >
> > > 1. It seems that this new policy should work for CreateTime as well.
> If a
> > > topic is configured with CreateTime, messages may not be added in
> strict
> > > order in the log. However, to build a time-based index, we will be
> > > maintaining the largest timestamp for all messages in a log segment. We
> > can
> > > delete a segment if its largest timestamp is less than
> > > log.retention.min.timestamp. This guarantees that no messages newer
> than
> > > log.retention.min.timestamp will be deleted, which is probably what the
> > > user wants.
> > >
> > > 2. Right now, the user can specify "delete" as the retention policy
> and a
> > > log segment will be deleted either when the size of a partition
> exceeds a
> > > threshold or the timestamp of a segment is older than a relative period
> > of
> > > time (say 7 days) from now. What you are proposing is not a new
> retention
> > > policy, but an additional check that will cause a segment to be deleted
> > > when the timestamp of a segment is older than an absolute timestamp? If
> > so,
> > > could you update the wiki accordingly?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Fri, Feb 19, 2016 at 2:57 PM, Bill Warshaw 
> > wrote:
> > >
> > > > Hello all,
> > > >
> > > > What is the next step with this proposal?  The work for KIP-32 that
> it
> > > was
> > > > based off merged earlier today (
> > https://github.com/apache/kafka/pull/764
> > > ,
> > > > thank you Becket).  I have an implementation with tests, and I've
> > > confirmed
> > > > that it actually works in a live system.  Is there more discussion
> that
> > > > needs to be had about this KIP, or should I start a VOTE thread?
> > > >
> > > >
> > > >
> > > > On Tue, Feb 16, 2016 at 5:06 PM, Jun Rao  wrote:
> > > >
> > > > > Bill,
> > > > >
> > > > > Thanks for the proposal. A couple of comments.
> > > > >
> > > > > 1. It seems that this new policy should work for CreateTime as
> well.
> > > If a
> > > > > topic is configured with CreateTime, messages may not be added in
> > > strict
> > > > > order in the log. However, to build a time-based index, we will be
> > > > > maintaining the largest timestamp for all messages in a log
> segment.
> > We
> > > > can
> > > > > delete a segment if its largest timestamp is less than
> > > > > log.retention.min.timestamp. This guarantees that no messages newer
> > > than
> > > > > log.retention.min.timestamp will be deleted, which is probably what
> > the
> > > > > user wants.
> > > > >
> > > > > 2. Right now, the user can specify "delete" as the retention policy
> > > and a
> > > > > log segment will be deleted either when the size of a partition
> > > exceeds a
> > > > > threshold or the timestamp of a segment is older than a relative
> > period
> > > > of
> > > > > time (say 7 days) from now. What you are proposing is not a new
> > > retention
> > > > > policy, but an additional check that will cause a segment to be
> > deleted
> > > > > when the timestamp of a segment is older than an absolute
> timestamp?
> > If
> > > > so,
> > > > > could you update the wiki accordingly?
> > > > >
> > > > > Jun
> > > > >
> > > > >
> > > > >
> > > > > On Sat, Feb 13, 2016 at 3:23 PM, Bill Warshaw  >
> > > > wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > That is a good catch, thanks for pointing it out.  If this KIP is
> > > > > accepted,
> > > > > > we'd need to document this and make the log cleaner not run
> > > > > timestamp-based
> > > > > > deletion unless message.timestamp.type=LogAppendTime.
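
To make the check being discussed concrete, here is a minimal, self-contained 
sketch; SegmentInfo and retentionMinTimestampMs are illustrative names, not 
Kafka's actual internals.

{code}
// Delete a segment only when its newest message is older than the absolute
// cutoff; since the largest timestamp bounds every message in the segment,
// no message newer than the cutoff is ever removed.
final class SegmentInfo {
    final long largestTimestampMs; // max timestamp across messages in the segment
    SegmentInfo(long largestTimestampMs) { this.largestTimestampMs = largestTimestampMs; }
}

final class TimestampRetentionCheck {
    static boolean shouldDelete(SegmentInfo segment, long retentionMinTimestampMs) {
        return segment.largestTimestampMs < retentionMinTimestampMs;
    }
}
{code}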

[DISCUSS] Deprecating the old Scala producers for the next release

2016-02-22 Thread Ismael Juma
Hi all,

The new Java producer was introduced in 0.8.2.0 (released in February
2015). It has become the default implementation for various tools since
0.9.0.0 (released in October 2015) and it is the only implementation with
support for the security features introduced in 0.9.0.0.

Given this, I think there's a good argument for deprecating the old Scala
producers for the next release (which is likely to be 0.10.0.0). This would
give our users a stronger signal regarding our plans to focus on the new
Java producer going forward.

Note that this proposal is only about deprecating the old Scala producers
as, in my opinion, it is too early to do the same for the old Scala
consumers.

Thoughts?

Ismael


[jira] [Commented] (KAFKA-1476) Get a list of consumer groups

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157349#comment-15157349
 ] 

ASF GitHub Bot commented on KAFKA-1476:
---

GitHub user christian-posta opened a pull request:

https://github.com/apache/kafka/pull/945

tidy up spacing for ConsumerGroupCommand related to KAFKA-1476 …

https://issues.apache.org/jira/browse/KAFKA-1476

Let me know if these kinds of contributions should have their own requisite 
JIRA opened in advance.

Cheers..

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/christian-posta/kafka 
ceposta-tidy-up-consumer-groups-describe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/945.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #945


commit dd9ab774dbe4105666de012212d565c0a0ec2ffa
Author: Christian Posta 
Date:   2016-02-22T17:29:46Z

tidy up spacing for ConsumerGroupCommand related to 
https://issues.apache.org/jira/browse/KAFKA-1476




> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Onur Karaman
>  Labels: newbie
> Fix For: 0.9.0.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476.patch, 
> KAFKA-1476_2014-11-10_11:58:26.patch, KAFKA-1476_2014-11-10_12:04:01.patch, 
> KAFKA-1476_2014-11-10_12:06:35.patch, KAFKA-1476_2014-12-05_12:00:12.patch, 
> KAFKA-1476_2015-01-12_16:22:26.patch, KAFKA-1476_2015-01-12_16:31:20.patch, 
> KAFKA-1476_2015-01-13_10:36:18.patch, KAFKA-1476_2015-01-15_14:30:04.patch, 
> KAFKA-1476_2015-01-22_02:32:52.patch, KAFKA-1476_2015-01-30_11:09:59.patch, 
> KAFKA-1476_2015-02-04_15:41:50.patch, KAFKA-1476_2015-02-04_18:03:15.patch, 
> KAFKA-1476_2015-02-05_03:01:09.patch, KAFKA-1476_2015-02-09_14:37:30.patch, 
> sample-kafka-consumer-groups-sh-output-1-23-2015.txt, 
> sample-kafka-consumer-groups-sh-output-2-5-2015.txt, 
> sample-kafka-consumer-groups-sh-output-2-9-2015.txt, 
> sample-kafka-consumer-groups-sh-output.txt
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option.  If a list 
> of consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e





[GitHub] kafka pull request: tidy up spacing for ConsumerGroupCommand relat...

2016-02-22 Thread christian-posta
GitHub user christian-posta opened a pull request:

https://github.com/apache/kafka/pull/945

tidy up spacing for ConsumerGroupCommand related to KAFKA-1476 …

https://issues.apache.org/jira/browse/KAFKA-1476

Let me know if these kinds of contributions should have their own requisite 
JIRA opened in advance.

Cheers..

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/christian-posta/kafka 
ceposta-tidy-up-consumer-groups-describe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/945.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #945


commit dd9ab774dbe4105666de012212d565c0a0ec2ffa
Author: Christian Posta 
Date:   2016-02-22T17:29:46Z

tidy up spacing for ConsumerGroupCommand related to 
https://issues.apache.org/jira/browse/KAFKA-1476






[jira] [Updated] (KAFKA-2832) support exclude.internal.topics in new consumer

2016-02-22 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-2832:
---
Status: Patch Available  (was: Open)

> support exclude.internal.topics in new consumer
> ---
>
> Key: KAFKA-2832
> URL: https://issues.apache.org/jira/browse/KAFKA-2832
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> The old consumer supports exclude.internal.topics that prevents internal 
> topics from being consumed by default. It would be useful to add that in the 
> new consumer, especially when wildcards are used.





Re: [DISCUSS] KIP-47 - Add timestamp-based log deletion policy

2016-02-22 Thread Jun Rao
Becket,

Since you submitted KIP-33, are you actively working on that? If so, it
would make sense to implement KIP-47 after KIP-33 so that it works for both
CreateTime and LogAppendTime.

Thanks,

Jun




On Fri, Feb 19, 2016 at 6:25 PM, Bill Warshaw  wrote:

> Hi Jun,
>
> 1.  I thought more about Andrew's comment about LogAppendTime.  The
> time-based index you are referring to is associated with KIP-33, correct?
> Currently my implementation is just checking the last message in a segment,
> so we're restricted to LogAppendTime.  When the work for KIP-33 is
> completed, it sounds like CreateTime would also be valid.  Do you happen to
> know if anyone is currently working on KIP-33?
>
> 2. I did update the wiki after reading your original comment, but reading
> over it again I realize I could word a couple things more clearly.  I will
> do that tonight.
>
> Bill
>
> On Fri, Feb 19, 2016 at 7:02 PM, Jun Rao  wrote:
>
> > Hi, Bill,
> >
> > I replied with the following comments earlier to the thread. Did you see
> > that?
> >
> > Thanks for the proposal. A couple of comments.
> >
> > 1. It seems that this new policy should work for CreateTime as well. If a
> > topic is configured with CreateTime, messages may not be added in strict
> > order in the log. However, to build a time-based index, we will be
> > maintaining the largest timestamp for all messages in a log segment. We
> can
> > delete a segment if its largest timestamp is less than
> > log.retention.min.timestamp. This guarantees that no messages newer than
> > log.retention.min.timestamp will be deleted, which is probably what the
> > user wants.
> >
> > 2. Right now, the user can specify "delete" as the retention policy and a
> > log segment will be deleted either when the size of a partition exceeds a
> > threshold or the timestamp of a segment is older than a relative period
> of
> > time (say 7 days) from now. What you are proposing is not a new retention
> > policy, but an additional check that will cause a segment to be deleted
> > when the timestamp of a segment is older than an absolute timestamp? If
> so,
> > could you update the wiki accordingly?
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Feb 19, 2016 at 2:57 PM, Bill Warshaw 
> wrote:
> >
> > > Hello all,
> > >
> > > What is the next step with this proposal?  The work for KIP-32 that it
> > was
> > > based off merged earlier today (
> https://github.com/apache/kafka/pull/764
> > ,
> > > thank you Becket).  I have an implementation with tests, and I've
> > confirmed
> > > that it actually works in a live system.  Is there more discussion that
> > > needs to be had about this KIP, or should I start a VOTE thread?
> > >
> > >
> > >
> > > On Tue, Feb 16, 2016 at 5:06 PM, Jun Rao  wrote:
> > >
> > > > Bill,
> > > >
> > > > Thanks for the proposal. A couple of comments.
> > > >
> > > > 1. It seems that this new policy should work for CreateTime as well.
> > If a
> > > > topic is configured with CreateTime, messages may not be added in
> > strict
> > > > order in the log. However, to build a time-based index, we will be
> > > > maintaining the largest timestamp for all messages in a log segment.
> We
> > > can
> > > > delete a segment if its largest timestamp is less than
> > > > log.retention.min.timestamp. This guarantees that no messages newer
> > than
> > > > log.retention.min.timestamp will be deleted, which is probably what
> the
> > > > user wants.
> > > >
> > > > 2. Right now, the user can specify "delete" as the retention policy
> > and a
> > > > log segment will be deleted either when the size of a partition
> > exceeds a
> > > > threshold or the timestamp of a segment is older than a relative
> period
> > > of
> > > > time (say 7 days) from now. What you are proposing is not a new
> > retention
> > > > policy, but an additional check that will cause a segment to be
> deleted
> > > > when the timestamp of a segment is older than an absolute timestamp?
> If
> > > so,
> > > > could you update the wiki accordingly?
> > > >
> > > > Jun
> > > >
> > > >
> > > >
> > > > On Sat, Feb 13, 2016 at 3:23 PM, Bill Warshaw 
> > > wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > That is a good catch, thanks for pointing it out.  If this KIP is
> > > > accepted,
> > > > > we'd need to document this and make the log cleaner not run
> > > > timestamp-based
> > > > > deletion unless message.timestamp.type=LogAppendTime.
> > > > >
> > > > > On Sat, Feb 13, 2016 at 5:38 AM, Andrew Schofield <
> > > > > andrew_schofield_j...@outlook.com> wrote:
> > > > >
> > > > > > This KIP is related to KIP-32, but I strikes me that it only
> makes
> > > > sense
> > > > > > with one of the two proposed message timestamp types. If I
> > understand
> > > > > > correctly, message timestamps are only certain to be
> monotonically
> > > > > > increasing in the log if message.timestamp.type=LogAppendTime.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Does timestamp-based auto-expiration require use of
> > > > > > message.timestamp.type=LogAppendTime?

[GitHub] kafka pull request: [WIP] Support multiple DNS entries for a given...

2016-02-22 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/508




re-build of the website/docs

2016-02-22 Thread Christian Posta
Wondering what the process is for pushing changes to the website/documentation?
I see the docs in HTML format in the GitHub repo, but I also notice the version
on the website slightly lags the version in the source. How do those sync up?
What's the process to initiate changes to the docs?

Thanks!


-- 
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io


[jira] [Updated] (KAFKA-3253) Skip duplicate message size check if there is no re-compression during log appending.

2016-02-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3253:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> Skip duplicate message size check if there is no re-compression during log 
> appending.
> -
>
> Key: KAFKA-3253
> URL: https://issues.apache.org/jira/browse/KAFKA-3253
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> In Log.append(), if the messages were not re-compressed, we don't need to 
> check the message size again because it has already been checked in 
> analyzeAndValidateMessageSet(). Also this second check is only needed when 
> assignOffsets is true.
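
A minimal sketch of the proposed condition, wrapped so it stands alone; the 
flag names are assumptions, not the actual Log.append() variables.

{code}
final class AppendSizeCheckSketch {
    // Re-check message sizes only when offsets were assigned and the message
    // set was re-compressed during append; otherwise the earlier check in
    // analyzeAndValidateMessageSet() already covered them.
    static void maybeRevalidate(boolean assignOffsets, boolean wasRecompressed,
                                Runnable revalidateMessageSizes) {
        if (assignOffsets && wasRecompressed) {
            revalidateMessageSizes.run();
        }
    }
}
{code}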





[jira] [Updated] (KAFKA-3259) KIP-31/KIP-32 clean-ups

2016-02-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3259:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> KIP-31/KIP-32 clean-ups
> ---
>
> Key: KAFKA-3259
> URL: https://issues.apache.org/jira/browse/KAFKA-3259
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> During review, I found a few things that could potentially be improved but 
> were not important enough to block the PR from being merged.





[jira] [Commented] (KAFKA-3251) Requesting committed offsets results in inconsistent results

2016-02-22 Thread Dimitrij Denissenko (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15156819#comment-15156819
 ] 

Dimitrij Denissenko commented on KAFKA-3251:


Looks like it's fixed in 0.9.0.1. Thanks.

> Requesting committed offsets results in inconsistent results
> 
>
> Key: KAFKA-3251
> URL: https://issues.apache.org/jira/browse/KAFKA-3251
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.9.0.0
>Reporter: Dimitrij Denissenko
>Assignee: Jason Gustafson
>
> Hi,
> I am using github.com/Shopify/sarama to retrieve the committed offsets for a 
> high-volume topic, but the bug actually seems to originate in Kafka itself.
> I have written a little test to query the offsets of all partitions of one 
> topic, every second. The request looks like this:
> {code}
> OffsetFetchRequest{
>   ConsumerGroup: "my-group-name", 
>   Version: 1,
>   TopicPartitions: []TopicPartition{
>  {TopicName: "logs", Partitions: []int32{0,1,2,3,4,5,6,7}
>   }
> }
> {code}
> Most of the time the responses are correct, but every 10 minutes or so 
> there is a glitch. I am not familiar with the Kafka internals, but it looks 
> like a race. Here's my log output:
> {code}
> ...
> 2016/02/19 09:48:10 topic=logs partition=00 error=0 offset=206567925
> 2016/02/19 09:48:10 topic=logs partition=01 error=0 offset=206671019
> 2016/02/19 09:48:10 topic=logs partition=02 error=0 offset=206567995
> 2016/02/19 09:48:10 topic=logs partition=03 error=0 offset=205785315
> 2016/02/19 09:48:10 topic=logs partition=04 error=0 offset=206526677
> 2016/02/19 09:48:10 topic=logs partition=05 error=0 offset=206713764
> 2016/02/19 09:48:10 topic=logs partition=06 error=0 offset=206524006
> 2016/02/19 09:48:10 topic=logs partition=07 error=0 offset=206629121
> 2016/02/19 09:48:11 topic=logs partition=00 error=0 offset=206572870
> 2016/02/19 09:48:11 topic=logs partition=01 error=0 offset=206675966
> 2016/02/19 09:48:11 topic=logs partition=02 error=0 offset=206573267
> 2016/02/19 09:48:11 topic=logs partition=03 error=0 offset=205790613
> 2016/02/19 09:48:11 topic=logs partition=04 error=0 offset=206531841
> 2016/02/19 09:48:11 topic=logs partition=05 error=0 offset=206718513
> 2016/02/19 09:48:11 topic=logs partition=06 error=0 offset=206529762
> 2016/02/19 09:48:11 topic=logs partition=07 error=0 offset=206634037
> 2016/02/19 09:48:12 topic=logs partition=00 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=01 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=02 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=03 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=04 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=05 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=06 error=0 offset=-1
> 2016/02/19 09:48:12 topic=logs partition=07 error=0 offset=-1
> 2016/02/19 09:48:13 topic=logs partition=00 error=0 offset=-1
> 2016/02/19 09:48:13 topic=logs partition=01 error=0 offset=206686020
> 2016/02/19 09:48:13 topic=logs partition=02 error=0 offset=206583861
> 2016/02/19 09:48:13 topic=logs partition=03 error=0 offset=205800480
> 2016/02/19 09:48:13 topic=logs partition=04 error=0 offset=206542733
> 2016/02/19 09:48:13 topic=logs partition=05 error=0 offset=206728251
> 2016/02/19 09:48:13 topic=logs partition=06 error=0 offset=206534794
> 2016/02/19 09:48:13 topic=logs partition=07 error=0 offset=206643853
> 2016/02/19 09:48:14 topic=logs partition=00 error=0 offset=206584533
> 2016/02/19 09:48:14 topic=logs partition=01 error=0 offset=206690275
> 2016/02/19 09:48:14 topic=logs partition=02 error=0 offset=206588902
> 2016/02/19 09:48:14 topic=logs partition=03 error=0 offset=205805413
> 2016/02/19 09:48:14 topic=logs partition=04 error=0 offset=206542733
> 2016/02/19 09:48:14 topic=logs partition=05 error=0 offset=206733144
> 2016/02/19 09:48:14 topic=logs partition=06 error=0 offset=206540275
> 2016/02/19 09:48:14 topic=logs partition=07 error=0 offset=206649392
> ...
> {code}
> As you can see, the returned error code is 0 and there is no obvious reason 
> why the returned offsets are suddenly wrong/blank. 
> I have also added some debugging to our offset committer to make absolutely 
> sure the numbers we are sending are correct, and they are. 
> Any help is greatly appreciated!





Kafka basic doubts

2016-02-22 Thread Pariksheet Barapatre
Hi All,

Greetings..!!! This is my first email to Kafka Community.

I have just started exploring Kafka on a CDH 5.5 cluster, which ships with
Kafka 0.8.2.1.

I am able to run sample programs for producer as well as consumer (both
high level and low level).

Now I am trying to load messages from Kafka into HDFS in batches, i.e. every
hour.

Managing offsets at the partition level will, I guess, do the trick, but I am
confused about the offset itself. Is it a line number or a byte offset?

I tried using the Kangaroo project but had no luck. It assumes the offset is a
number of bytes, whereas I am getting a line number as the offset.

Also, the Kafka Connect service was introduced in Kafka 0.9; has anybody tried
loading data from Kafka to HDFS using it?

Many Thanks in Advance.

Regards
Pari


Random email

2016-02-22 Thread Simon Cooper
Apologies for the random email, sent to the wrong mailing list. Please ignore...


Recall: Engine team

2016-02-22 Thread Simon Cooper
Simon Cooper would like to recall the message, "Engine team".

Engine team

2016-02-22 Thread Simon Cooper
Me:

- Last week:
  - DB byte arrays refactor done
  - Did some work on the zapp performance issues; currently blocked as the
cluster is being used
  - Also lots of bugfixes around the sandbox

- This week:
  - Follow-up tasks for the byte arrays refactor
  - Looking at performance when the cluster is free
  - Maybe getting onto DB encryption or the blobstore later in the week if
nothing else comes up

Daniel:

- Continuing with the UI server. Lots of gremlins are popping up, leading to
complications in getting the sandbox in.

Vit:

- Carrying on with the Betfair poller; should get it done this week/early
next week
- Also some monitoring bugs


[jira] [Updated] (KAFKA-3258) BrokerTopicMetrics of deleted topics are never deleted

2016-02-22 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3258:
--
Status: Patch Available  (was: Open)

> BrokerTopicMetrics of deleted topics are never deleted
> --
>
> Key: KAFKA-3258
> URL: https://issues.apache.org/jira/browse/KAFKA-3258
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Per-topic BrokerTopicMetrics generated by brokers are not deleted even when 
> the topic is deleted. This shows misleading metrics in metrics reporters long 
> after a topic is deleted and is also a resource leak.
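
A minimal sketch of the cleanup described here, assuming a removeMetrics(topic) 
helper; the helper and the hosting check are illustrative, not the actual 
broker code.

{code}
import java.util.Set;
import org.apache.kafka.common.TopicPartition;

final class TopicMetricsCleanupSketch {
    interface BrokerTopicStats { void removeMetrics(String topic); } // assumed helper

    // Drop a topic's metrics once this broker hosts no replica of any of its
    // partitions (i.e. after the topic's deletion completes locally).
    static void maybeRemoveTopicMetrics(String topic,
                                        Set<TopicPartition> replicasOnBroker,
                                        BrokerTopicStats stats) {
        boolean hostsTopic = replicasOnBroker.stream()
                .anyMatch(tp -> tp.topic().equals(topic));
        if (!hostsTopic) {
            stats.removeMetrics(topic);
        }
    }
}
{code}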





[jira] [Commented] (KAFKA-3258) BrokerTopicMetrics of deleted topics are never deleted

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15156715#comment-15156715
 ] 

ASF GitHub Bot commented on KAFKA-3258:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/944

KAFKA-3258: Delete broker topic metrics of deleted topics

Delete per-topic metrics when there are no replicas of any partitions of 
the topic on a broker.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3258

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/944.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #944






> BrokerTopicMetrics of deleted topics are never deleted
> --
>
> Key: KAFKA-3258
> URL: https://issues.apache.org/jira/browse/KAFKA-3258
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Per-topic BrokerTopicMetrics generated by brokers are not deleted even when 
> the topic is deleted. This shows misleading metrics in metrics reporters long 
> after a topic is deleted and is also a resource leak.





[GitHub] kafka pull request: KAFKA-3258: Delete broker topic metrics of del...

2016-02-22 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/944

KAFKA-3258: Delete broker topic metrics of deleted topics

Delete per-topic metrics when there are no replicas of any partitions of 
the topic on a broker.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3258

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/944.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #944








[jira] [Commented] (KAFKA-3259) KIP-31/KIP-32 clean-ups

2016-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15156659#comment-15156659
 ] 

ASF GitHub Bot commented on KAFKA-3259:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/943

KAFKA-3259 KAFKA-3253; KIP-31/KIP-32 Follow-up



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-3259-kip-31-32-clean-ups

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/943.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #943


commit cb6dda92656e26b69b8134ef76b7ceee4a26ee27
Author: Ismael Juma 
Date:   2016-02-18T09:59:44Z

Replace multiple catch blocks by a single one with multiple clauses

commit dd8382b38bfda6acb3b97d3cc5bb1f2e093fbc8b
Author: Ismael Juma 
Date:   2016-02-22T07:18:56Z

Several code style and documentation wording improvements

commit 7902940adabb07192d09e171e1709f72eb16adf3
Author: Ismael Juma 
Date:   2016-02-22T07:25:43Z

Use `Ordered` methods for `ApiVersion` comparison instead of `onOrAfter`

Also improve error-handling in `apply`

commit b1e95ab66e1e3c4e34bc7b9e03aab8757b66f5fa
Author: Ismael Juma 
Date:   2016-02-22T07:26:17Z

Move `MorrorMakerMessageHandler` node to correct section in upgrade notes

commit a2d6fa4b53a5299f8a1661fa553f33654d0775c8
Author: Ismael Juma 
Date:   2016-02-22T07:31:31Z

Eliminate redundant collection traversals in `LogCleaner`

commit c0a927e83e6dfdf0cc7c52bec279d186c42d2ff7
Author: Ismael Juma 
Date:   2016-02-22T07:43:34Z

Don't mutate passed in `topicConfig` in `TopicConfigHandler`

commit 7846aef0b012f3f2f197a65679cb2f527b90cd0c
Author: Ismael Juma 
Date:   2016-02-22T07:57:31Z

Also test `UncleanLeaderElectionEnableProp` in `testFromPropsInvalid`

commit 45b0e1eb01fbd39549b635c0416e377481879e34
Author: Ismael Juma 
Date:   2016-02-22T08:05:45Z

Use `convertedPartitionData` when throttling clients

commit cd726334e88d4ff255c1432eb5c9d782f8090242
Author: Ismael Juma 
Date:   2016-02-22T08:36:33Z

KAFKA-3253: Skip duplicate message size check if there is no re-compression 
during log appending

commit c2ec781f689177f85b6517d971467afea7547318
Author: Ismael Juma 
Date:   2016-02-22T08:51:15Z

A few clean-ups in `ByteBufferMessageSet`

The most notable one is using `LongRef` instead of
`AtomicLong`, which meant that many other classes
also had to be updated.

commit 4955f953336e8218be861e86550e96440a4d10e7
Author: Ismael Juma 
Date:   2016-02-22T08:52:04Z

Use `TimestampType` and `ApiVersion` in `KafkaConfig` instead of `String`

commit 635178ad4cb647439d1133b0030394ec43e1658f
Author: Ismael Juma 
Date:   2016-02-22T08:53:29Z

Add `is` prefix to `magicValueInAllWrapperMessages` method

This makes it clear that it returns a `boolean`

commit 6f68a3b68d7e5c275f1606ac0424a9ca8ec300d3
Author: Ismael Juma 
Date:   2016-02-22T09:02:29Z

Improve `TimestampType` naming and switch static to instance method




> KIP-31/KIP-32 clean-ups
> ---
>
> Key: KAFKA-3259
> URL: https://issues.apache.org/jira/browse/KAFKA-3259
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> During review, I found a few things that could potentially be improved but 
> were not important enough to block the PR from being merged.





[GitHub] kafka pull request: KAFKA-3259 KAFKA-3253; KIP-31/KIP-32 Follow-up

2016-02-22 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/943

KAFKA-3259 KAFKA-3253; KIP-31/KIP-32 Follow-up



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-3259-kip-31-32-clean-ups

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/943.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #943


commit cb6dda92656e26b69b8134ef76b7ceee4a26ee27
Author: Ismael Juma 
Date:   2016-02-18T09:59:44Z

Replace multiple catch blocks by a single one with multiple clauses

commit dd8382b38bfda6acb3b97d3cc5bb1f2e093fbc8b
Author: Ismael Juma 
Date:   2016-02-22T07:18:56Z

Several code style and documentation wording improvements

commit 7902940adabb07192d09e171e1709f72eb16adf3
Author: Ismael Juma 
Date:   2016-02-22T07:25:43Z

Use `Ordered` methods for `ApiVersion` comparison instead of `onOrAfter`

Also improve error-handling in `apply`

commit b1e95ab66e1e3c4e34bc7b9e03aab8757b66f5fa
Author: Ismael Juma 
Date:   2016-02-22T07:26:17Z

Move `MorrorMakerMessageHandler` node to correct section in upgrade notes

commit a2d6fa4b53a5299f8a1661fa553f33654d0775c8
Author: Ismael Juma 
Date:   2016-02-22T07:31:31Z

Eliminate redundant collection traversals in `LogCleaner`

commit c0a927e83e6dfdf0cc7c52bec279d186c42d2ff7
Author: Ismael Juma 
Date:   2016-02-22T07:43:34Z

Don't mutate passed in `topicConfig` in `TopicConfigHandler`

commit 7846aef0b012f3f2f197a65679cb2f527b90cd0c
Author: Ismael Juma 
Date:   2016-02-22T07:57:31Z

Also test `UncleanLeaderElectionEnableProp` in `testFromPropsInvalid`

commit 45b0e1eb01fbd39549b635c0416e377481879e34
Author: Ismael Juma 
Date:   2016-02-22T08:05:45Z

Use `convertedPartitionData` when throttling clients

commit cd726334e88d4ff255c1432eb5c9d782f8090242
Author: Ismael Juma 
Date:   2016-02-22T08:36:33Z

KAFKA-3253: Skip duplicate message size check if there is no re-compression 
during log appending

commit c2ec781f689177f85b6517d971467afea7547318
Author: Ismael Juma 
Date:   2016-02-22T08:51:15Z

A few clean-ups in `ByteBufferMessageSet`

The most notable one is using `LongRef` instead of
`AtomicLong`, which meant that many other classes
also had to be updated.

commit 4955f953336e8218be861e86550e96440a4d10e7
Author: Ismael Juma 
Date:   2016-02-22T08:52:04Z

Use `TimestampType` and `ApiVersion` in `KafkaConfig` instead of `String`

commit 635178ad4cb647439d1133b0030394ec43e1658f
Author: Ismael Juma 
Date:   2016-02-22T08:53:29Z

Add `is` prefix to `magicValueInAllWrapperMessages` method

This makes it clear that it returns a `boolean`

commit 6f68a3b68d7e5c275f1606ac0424a9ca8ec300d3
Author: Ismael Juma 
Date:   2016-02-22T09:02:29Z

Improve `TimestampType` naming and switch static to instance method






[jira] [Created] (KAFKA-3259) KIP-31/KIP-32 clean-ups

2016-02-22 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3259:
--

 Summary: KIP-31/KIP-32 clean-ups
 Key: KAFKA-3259
 URL: https://issues.apache.org/jira/browse/KAFKA-3259
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.9.1.0


During review, I found a few things that could potentially be improved but were 
not important enough to block the PR from being merged.





[jira] [Created] (KAFKA-3258) BrokerTopicMetrics of deleted topics are never deleted

2016-02-22 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-3258:
-

 Summary: BrokerTopicMetrics of deleted topics are never deleted
 Key: KAFKA-3258
 URL: https://issues.apache.org/jira/browse/KAFKA-3258
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.1
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


Per-topic BrokerTopicMetrics generated by brokers are not deleted even when the 
topic is deleted. This shows misleading metrics in metrics reporters long after 
a topic is deleted and is also a resource leak.


