Re: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers

2018-04-18 Thread Guozhang Wang
Thanks Boyang, +1 from me.

On Wed, Apr 18, 2018 at 4:43 PM, Boyang Chen  wrote:

> Hey guys,
>
>
> Sorry, I forgot to include a summary of this KIP. Basically, the idea is to
> separate the stream config for different internal consumers of Kafka
> Streams by supplying prefixes. We currently have
>
> (1) "main.consumer." for the main consumer
>
> (2) "restore.consumer." for the restore consumer
>
> (3) "global.consumer." for the global consumer
>
>
> Best,
>
> Boyang
>
> 
> From: Boyang Chen 
> Sent: Tuesday, April 17, 2018 12:18 PM
> To: dev@kafka.apache.org
> Subject: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers
>
>
> Hey friends.
>
>
> I would like to start a vote on KIP 276: add StreamsConfig prefix for
> different consumers.
>
> KIP: here <https://cwiki.apache.org/confluence/display/KAFKA/KIP-276+Add+StreamsConfig+prefix+for+different+consumers>
>
> Pull request: here
>
> Jira: here
>
>
> Let me know if you have questions.
>
>
> Thank you!
>
>
>
>


-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk8 #2567

2018-04-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6802: Improved logging for missing topics during task 
assignment

--
[...truncated 417.50 KB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumVerif

Re: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers

2018-04-18 Thread Boyang Chen
Hey guys,


Sorry, I forgot to include a summary of this KIP. Basically, the idea is to 
separate the stream config for different internal consumers of Kafka Streams by 
supplying prefixes. We currently have

(1) "main.consumer." for the main consumer

(2) "restore.consumer." for the restore consumer

(3) "global.consumer." for the global consumer
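Under the proposal these are plain string prefixes on config keys, so a per-consumer override can be sketched with ordinary `Properties`. The sketch below is illustrative: only the three prefix strings come from the KIP; the class name and the specific consumer configs being overridden are made up for the example.

```java
import java.util.Properties;

// Sketch of KIP-276-style prefixed configs. Only the prefix strings
// ("main.consumer.", "restore.consumer.", "global.consumer.") are from
// the KIP; the keys and values are illustrative.
public class PrefixedConfigExample {
    static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "my-app");
        props.put("bootstrap.servers", "localhost:9092");
        // Shared default that applies to all consumers:
        props.put("max.poll.records", "500");
        // Override for the main consumer only:
        props.put("main.consumer.max.poll.records", "100");
        // Larger fetches for the restore consumer only:
        props.put("restore.consumer.max.partition.fetch.bytes", "2097152");
        // Reset behavior for the global consumer only:
        props.put("global.consumer.auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        // The prefixed key shadows the shared default for that one consumer.
        System.out.println(build().getProperty("main.consumer.max.poll.records"));
    }
}
```

The point of the prefixes is exactly this shadowing: an unprefixed key remains the default for every internal consumer, while a prefixed key targets just one of them.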


Best,

Boyang


From: Boyang Chen 
Sent: Tuesday, April 17, 2018 12:18 PM
To: dev@kafka.apache.org
Subject: [VOTE] KIP-276 Add StreamsConfig prefix for different consumers


Hey friends.


I would like to start a vote on KIP 276: add StreamsConfig prefix for different 
consumers.

KIP: 
here

Pull request: here

Jira: here


Let me know if you have questions.


Thank you!






Build failed in Jenkins: kafka-trunk-jdk8 #2566

2018-04-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6775: Fix the issue of not calling the super class's init (#4859)

--
[...truncated 415.46 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.ep

Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-18 Thread Rahul Singh
+1

On Apr 18, 2018, 5:12 PM -0500, Bill Bejeck wrote:
> +1
>
> Thanks,
> Bill
>
> On Wed, Apr 18, 2018 at 6:07 PM, Edoardo Comar  wrote:
>
> > thanks Ismael
> >
> > +1 (non-binding)
> >
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> > From:   Rajini Sivaram 
> > To: dev 
> > Date:   18/04/2018 22:05
> > Subject: Re: [VOTE] Kafka 2.0.0 in June 2018
> >
> >
> >
> > Hi Ismael, Thanks for following this up.
> >
> > +1 (binding)
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Wed, Apr 18, 2018 at 8:09 PM, Ted Yu  wrote:
> >
> > > +1
> > >  Original message From: Ismael Juma 
> > > Date: 4/18/18  11:35 AM  (GMT-08:00) To: dev 
> > > Subject: [VOTE] Kafka 2.0.0 in June 2018
> > > Hi all,
> > >
> > > I started a discussion last year about bumping the version of the June
> > 2018
> > > release to 2.0.0[1]. To reiterate the reasons in the original post:
> > >
> > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> > version
> > > bump due to semantic versioning.
> > >
> > > 2. Take the chance to remove deprecated code that was deprecated prior
> > to
> > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> > > move faster.
> > >
> > > One concern that was raised is that we still do not have a rolling
> > upgrade
> > > path for the old ZK-based consumers. Since the Scala clients haven't
> > been
> > > updated in a long time (they don't support security or the latest
> > message
> > > format), users who need them can continue to use 1.1.0 with no loss of
> > > functionality.
> > >
> > > Since it's already mid-April and people seemed receptive during the
> > > discussion last year, I'm going straight to a vote, but we can discuss
> > more
> > > if needed (of course).
> > >
> > > Ismael
> > >
> > > [1]
> > >
> > > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> > > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > >
> >
> >
> >
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
> >


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-18 Thread Bill Bejeck
+1

Thanks,
Bill

On Wed, Apr 18, 2018 at 6:07 PM, Edoardo Comar  wrote:

> thanks Ismael
>
> +1 (non-binding)
>
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
> From:   Rajini Sivaram 
> To: dev 
> Date:   18/04/2018 22:05
> Subject:Re: [VOTE] Kafka 2.0.0 in June 2018
>
>
>
> Hi Ismael, Thanks for following this up.
>
> +1 (binding)
>
> Regards,
>
> Rajini
>
>
> On Wed, Apr 18, 2018 at 8:09 PM, Ted Yu  wrote:
>
> > +1
> >  Original message From: Ismael Juma 
> > Date: 4/18/18  11:35 AM  (GMT-08:00) To: dev 
> > Subject: [VOTE] Kafka 2.0.0 in June 2018
> > Hi all,
> >
> > I started a discussion last year about bumping the version of the June
> 2018
> > release to 2.0.0[1]. To reiterate the reasons in the original post:
> >
> > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> version
> > bump due to semantic versioning.
> >
> > 2. Take the chance to remove deprecated code that was deprecated prior
> to
> > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> > move faster.
> >
> > One concern that was raised is that we still do not have a rolling
> upgrade
> > path for the old ZK-based consumers. Since the Scala clients haven't
> been
> > updated in a long time (they don't support security or the latest
> message
> > format), users who need them can continue to use 1.1.0 with no loss of
> > functionality.
> >
> > Since it's already mid-April and people seemed receptive during the
> > discussion last year, I'm going straight to a vote, but we can discuss
> more
> > if needed (of course).
> >
> > Ismael
> >
> > [1]
> >
> > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> >
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-18 Thread Jakub Scholz
+1 (non-binding) ... I think that both points make a lot of sense.

Jakub

On Thu, Apr 19, 2018 at 12:07 AM, Edoardo Comar  wrote:

> thanks Ismael
>
> +1 (non-binding)
>
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
> From:   Rajini Sivaram 
> To: dev 
> Date:   18/04/2018 22:05
> Subject:Re: [VOTE] Kafka 2.0.0 in June 2018
>
>
>
> Hi Ismael, Thanks for following this up.
>
> +1 (binding)
>
> Regards,
>
> Rajini
>
>
> On Wed, Apr 18, 2018 at 8:09 PM, Ted Yu  wrote:
>
> > +1
> >  Original message From: Ismael Juma 
> > Date: 4/18/18  11:35 AM  (GMT-08:00) To: dev 
> > Subject: [VOTE] Kafka 2.0.0 in June 2018
> > Hi all,
> >
> > I started a discussion last year about bumping the version of the June
> 2018
> > release to 2.0.0[1]. To reiterate the reasons in the original post:
> >
> > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major
> version
> > bump due to semantic versioning.
> >
> > 2. Take the chance to remove deprecated code that was deprecated prior
> to
> > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> > move faster.
> >
> > One concern that was raised is that we still do not have a rolling
> upgrade
> > path for the old ZK-based consumers. Since the Scala clients haven't
> been
> > updated in a long time (they don't support security or the latest
> message
> > format), users who need them can continue to use 1.1.0 with no loss of
> > functionality.
> >
> > Since it's already mid-April and people seemed receptive during the
> > discussion last year, I'm going straight to a vote, but we can discuss
> more
> > if needed (of course).
> >
> > Ismael
> >
> > [1]
> >
> > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> > 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> >
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>


[jira] [Resolved] (KAFKA-6775) AbstractProcessor created in SimpleBenchmark should call super#init

2018-04-18 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6775.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> AbstractProcessor created in SimpleBenchmark should call super#init
> ---
>
> Key: KAFKA-6775
> URL: https://issues.apache.org/jira/browse/KAFKA-6775
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Ted Yu
>Assignee: Jimin Hsieh
>Priority: Minor
>  Labels: easy-fix, newbie
> Fix For: 1.2.0
>
>
> Around line 610:
> {code}
> return new AbstractProcessor() {
>     @Override
>     public void init(ProcessorContext context) {
>     }
> {code}
> super.init(context) should be called in the overridden init above.
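The consequence of the missing call can be shown with local stand-in classes. Everything below (the class and field names, the simplified context type) is a self-contained mock, not the real Kafka Streams API; it only mirrors the pattern the JIRA describes.

```java
// Local stand-ins mirroring the bug: a base class that stores the context
// in init(), and an override that forgets to call super.init().
public class SuperInitExample {
    static class Context { }

    static abstract class AbstractProcessor {
        Context context;
        public void init(Context context) {
            this.context = context; // base class keeps the context reference
        }
    }

    public static void main(String[] args) {
        // Buggy pattern from the JIRA: init overridden without super.init,
        // so the base class never stores the context.
        AbstractProcessor buggy = new AbstractProcessor() {
            @Override
            public void init(Context context) {
                // super.init(context) missing: this.context stays null
            }
        };
        // Fixed pattern: delegate to the base class first.
        AbstractProcessor fixed = new AbstractProcessor() {
            @Override
            public void init(Context context) {
                super.init(context);
            }
        };
        buggy.init(new Context());
        fixed.init(new Context());
        System.out.println(buggy.context == null); // true: context was lost
        System.out.println(fixed.context == null); // false: context retained
    }
}
```

Any later base-class code that dereferences the stored context would hit a NullPointerException in the buggy variant, which is why the fix is a one-line `super.init(context)` call.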



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-18 Thread Edoardo Comar
thanks Ismael 

+1 (non-binding)

--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Rajini Sivaram 
To: dev 
Date:   18/04/2018 22:05
Subject:Re: [VOTE] Kafka 2.0.0 in June 2018



Hi Ismael, Thanks for following this up.

+1 (binding)

Regards,

Rajini


On Wed, Apr 18, 2018 at 8:09 PM, Ted Yu  wrote:

> +1
>  Original message From: Ismael Juma 
> Date: 4/18/18  11:35 AM  (GMT-08:00) To: dev 
> Subject: [VOTE] Kafka 2.0.0 in June 2018
> Hi all,
>
> I started a discussion last year about bumping the version of the June 
2018
> release to 2.0.0[1]. To reiterate the reasons in the original post:
>
> 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major 
version
> bump due to semantic versioning.
>
> 2. Take the chance to remove deprecated code that was deprecated prior 
to
> 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> move faster.
>
> One concern that was raised is that we still do not have a rolling 
upgrade
> path for the old ZK-based consumers. Since the Scala clients haven't 
been
> updated in a long time (they don't support security or the latest 
message
> format), users who need them can continue to use 1.1.0 with no loss of
> functionality.
>
> Since it's already mid-April and people seemed receptive during the
> discussion last year, I'm going straight to a vote, but we can discuss 
more
> if needed (of course).
>
> Ismael
>
> [1]
> 
> https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
>



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


[jira] [Created] (KAFKA-6803) Caching is turned off for stream-stream Join

2018-04-18 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-6803:


 Summary: Caching is turned off for stream-stream Join
 Key: KAFKA-6803
 URL: https://issues.apache.org/jira/browse/KAFKA-6803
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang


For other join types (table-table and table-stream joins), caching is turned on 
in `Materialized` by default. However, for stream-stream joins we hard-coded 
caching to be disabled internally.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-270 A Scala wrapper library for Kafka Streams

2018-04-18 Thread Debasish Ghosh
The voting process for KIP-270, a Scala wrapper library for Kafka Streams,
has generated 5 binding and 4 non-binding votes with no objections.

Thanks for all the reviews and feedback.

Regards.

On Mon, 16 Apr 2018 at 9:28 PM, Guozhang Wang  wrote:

> Thanks Debasish for the KIP!
>
> Will make another pass on the PR itself, but the KIP itself looks good. I'm
> +1 (binding).
>
> Guozhang
>
> On Mon, Apr 16, 2018 at 10:51 AM, John Roesler  wrote:
>
> > Thanks again for this effort. I'm +1 (non-binding).
> > -John
> >
> > On Mon, Apr 16, 2018 at 9:39 AM, Ismael Juma  wrote:
> >
> > > Thanks for the contribution. I haven't reviewed all the new APIs in
> > detail,
> > > but the general approach sounds good to me. +1 (binding).
> > >
> > > Ismael
> > >
> > > On Wed, Apr 11, 2018 at 3:09 AM, Debasish Ghosh <
> > > debasish.gh...@lightbend.com> wrote:
> > >
> > > > Hello everyone -
> > > >
> > > > This is in continuation to the discussion regarding
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 270+-+A+Scala+Wrapper+Library+for+Kafka+Streams,
> > > > which is a KIP for implementing a Scala wrapper library for Kafka
> > > Streams.
> > > >
> > > > We have had a PR (https://github.com/apache/kafka/pull/4756) for
> quite
> > > > some
> > > > time now and we have had lots of discussions and suggested
> > improvements.
> > > > Thanks to all who participated in the discussion and came up with all
> > the
> > > > suggestions for improvements.
> > > >
> > > > The purpose of this thread is to get an agreement on the
> implementation
> > > and
> > > > have it included as part of Kafka.
> > > >
> > > > Looking forward ..
> > > >
> > > > regards.
> > > >
> > > > --
> > > > Debasish Ghosh
> > > > Principal Engineer
> > > >
> > > > Twitter: @debasishg
> > > > Blog: http://debasishg.blogspot.com
> > > > Code: https://github.com/debasishg
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>
-- 
Debasish Ghosh
Principal Engineer

Twitter: @debasishg
Blog: http://debasishg.blogspot.com
Code: https://github.com/debasishg


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-18 Thread Rajini Sivaram
Hi Ismael, Thanks for following this up.

+1 (binding)

Regards,

Rajini


On Wed, Apr 18, 2018 at 8:09 PM, Ted Yu  wrote:

> +1
>  Original message From: Ismael Juma 
> Date: 4/18/18  11:35 AM  (GMT-08:00) To: dev 
> Subject: [VOTE] Kafka 2.0.0 in June 2018
> Hi all,
>
> I started a discussion last year about bumping the version of the June 2018
> release to 2.0.0[1]. To reiterate the reasons in the original post:
>
> 1. Adopt KIP-118 (Drop Support for Java 7), which requires a major version
> bump due to semantic versioning.
>
> 2. Take the chance to remove deprecated code that was deprecated prior to
> 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
> move faster.
>
> One concern that was raised is that we still do not have a rolling upgrade
> path for the old ZK-based consumers. Since the Scala clients haven't been
> updated in a long time (they don't support security or the latest message
> format), users who need them can continue to use 1.1.0 with no loss of
> functionality.
>
> Since it's already mid-April and people seemed receptive during the
> discussion last year, I'm going straight to a vote, but we can discuss more
> if needed (of course).
>
> Ismael
>
> [1]
> https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c93
> 3281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
>


Build failed in Jenkins: kafka-trunk-jdk8 #2565

2018-04-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-3365; Add documentation method for protocol types and update doc

--
[...truncated 3.96 MB...]
org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldWriteCheckpointForPersistentLogEnabledStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldSuccessfullyReInitializeStateStores STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldSuccessfullyReInitializeStateStores PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionOnRegisterWhenStoreHasAlreadyBeenRegistered 
STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionOnRegisterWhenStoreHasAlreadyBeenRegistered 
PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldRestoreStoreWithSinglePutRestoreSpecification STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldRestoreStoreWithSinglePutRestoreSpecification PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldNotChangeOffsetsIfAckedOffsetsIsNull STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldNotChangeOffsetsIfAckedOffsetsIsNull PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionIfStoreNameIsSameAsCheckpointFileName STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldThrowIllegalArgumentExceptionIfStoreNameIsSameAsCheckpointFileName PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCloseStateManagerWithOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCloseStateManagerWithOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForTopic STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForTopic PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeProcessorTopology STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeProcessorTopology PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeContext STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeContext PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCheckpointOffsetsWhenStateIsFlushed STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldCheckpointOffsetsWhenStateIsFlushed PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeStateManager STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldInitializeStateManager PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForOtherTopic STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldProcessRecordsForOtherTopic PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler STARTED

org.apache.kafka.streams.processor.internals.GlobalStateTaskTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testThroughputMetrics STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testThroughputMetrics PASSED

org.apache.kafka.streams.processor.internals.StreamsMetricsImplTest > 
testLatencyMetrics STARTED

org.apache.kafka.streams.processor.internals.StreamsMetricsImpl

[jira] [Reopened] (KAFKA-2334) Prevent HW from going back during leader failover

2018-04-18 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reopened KAFKA-2334:


> Prevent HW from going back during leader failover 
> --
>
> Key: KAFKA-2334
> URL: https://issues.apache.org/jira/browse/KAFKA-2334
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: reliability
>
> Consider the following scenario:
> 0. Kafka uses a replication factor of 2, with broker B1 as the leader and B2 
> as the follower. 
> 1. A producer keeps sending to Kafka with ack=-1.
> 2. A consumer repeatedly issues ListOffset requests to Kafka.
> And the following sequence:
> 0. B1's current log-end-offset (LEO) is 0, HW-offset 0; same with B2.
> 1. B1 receives a ProduceRequest of 100 messages, appends them to its local 
> log (LEO becomes 100) and holds the request in purgatory.
> 2. B1 receives a FetchRequest starting at offset 0 from follower B2, and 
> returns the 100 messages.
> 3. B2 appends the received messages to its local log (LEO becomes 100).
> 4. B1 receives another FetchRequest starting at offset 100 from B2; knowing 
> that B2's LEO has caught up to 100, it updates its own HW, satisfies the 
> ProduceRequest in purgatory, and sends the FetchResponse with HW 100 back to 
> B2 ASYNCHRONOUSLY.
> 5. B1 successfully sends the ProduceResponse to the producer and then fails, 
> so the FetchResponse does not reach B2, whose HW remains 0.
> From the consumer's point of view, it could first see the latest offset of 
> 100 (from B1), then see the latest offset of 0 (from B2), and then watch the 
> latest offset gradually catch up to 100.
> This is because we use HW to guard ListOffset and 
> fetch-from-ordinary-consumer requests.
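The sequence above can be sketched as a toy simulation (hypothetical Python, not Kafka code; all names are illustrative) showing how a ListOffset-style query answered from the HW can appear to go backwards after the leader failover:

```python
# Toy model of the scenario: two replicas tracking LEO and HW.
class Replica:
    def __init__(self, name):
        self.name = name
        self.leo = 0   # log-end-offset
        self.hw = 0    # high watermark

    def list_offset(self):
        # ListOffset is answered from the HW, not the LEO
        return self.hw

b1, b2 = Replica("B1"), Replica("B2")

b1.leo = 100                 # step 1: B1 appends 100 messages
b2.leo = b1.leo              # steps 2-3: B2 fetches and appends them
b1.hw = b2.leo               # step 4: B1 learns B2 caught up, advances HW
# step 5: B1 fails before the FetchResponse carrying HW=100 reaches B2,
# so B2's HW stays at 0 even though its log already has 100 messages.

observed = [b1.list_offset()]      # consumer asks leader B1 -> sees 100
# leader failover: B2 becomes the new leader
observed.append(b2.list_offset())  # consumer asks new leader B2 -> sees 0
print(observed)  # [100, 0] -- the latest offset appears to go backwards
```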



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-275 - Indicate "isClosing" in the SinkTaskContext

2018-04-18 Thread Ted Yu
+1

On Wed, Apr 18, 2018 at 12:40 PM, Matt Farmer  wrote:

> Good afternoon/evening/morning all:
>
> I'd like to start voting on KIP-275: Indicate "isClosing" in the
> SinkTaskContext
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607
>
> I'm going to start preparing the patch we've been using internally for PR
> and get it up for review later this week.
>
> Thanks!
> Matt
>


[VOTE] KIP-275 - Indicate "isClosing" in the SinkTaskContext

2018-04-18 Thread Matt Farmer
Good afternoon/evening/morning all:

I'd like to start voting on KIP-275: Indicate "isClosing" in the
SinkTaskContext
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607

I'm going to start preparing the patch we've been using internally for PR
and get it up for review later this week.

Thanks!
Matt


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-18 Thread Ted Yu
+1
 Original message From: Ismael Juma  Date: 
4/18/18  11:35 AM  (GMT-08:00) To: dev  Subject: [VOTE] 
Kafka 2.0.0 in June 2018 
Hi all,

I started a discussion last year about bumping the version of the June 2018
release to 2.0.0[1]. To reiterate the reasons in the original post:

1. Adopt KIP-118 (Drop Support for Java 7), which requires a major version
bump due to semantic versioning.

2. Take the chance to remove deprecated code that was deprecated prior to
1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
move faster.

One concern that was raised is that we still do not have a rolling upgrade
path for the old ZK-based consumers. Since the Scala clients haven't been
updated in a long time (they don't support security or the latest message
format), users who need them can continue to use 1.1.0 with no loss of
functionality.

Since it's already mid-April and people seemed receptive during the
discussion last year, I'm going straight to a vote, but we can discuss more
if needed (of course).

Ismael

[1]
https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c933281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E


[VOTE] Kafka 2.0.0 in June 2018

2018-04-18 Thread Ismael Juma
Hi all,

I started a discussion last year about bumping the version of the June 2018
release to 2.0.0[1]. To reiterate the reasons in the original post:

1. Adopt KIP-118 (Drop Support for Java 7), which requires a major version
bump due to semantic versioning.

2. Take the chance to remove deprecated code that was deprecated prior to
1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so that we can
move faster.

One concern that was raised is that we still do not have a rolling upgrade
path for the old ZK-based consumers. Since the Scala clients haven't been
updated in a long time (they don't support security or the latest message
format), users who need them can continue to use 1.1.0 with no loss of
functionality.

Since it's already mid-April and people seemed receptive during the
discussion last year, I'm going straight to a vote, but we can discuss more
if needed (of course).

Ismael

[1]
https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c933281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E


Jenkins build is back to normal : kafka-trunk-jdk10 #32

2018-04-18 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6802) Improve logging when topics aren't known and assignments skipped

2018-04-18 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-6802:
--

 Summary: Improve logging when topics aren't known and assignments 
skipped
 Key: KAFKA-6802
 URL: https://issues.apache.org/jira/browse/KAFKA-6802
 Project: Kafka
  Issue Type: Improvement
Reporter: Bill Bejeck
Assignee: Bill Bejeck






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3365) Add a documentation field for types and update doc generation

2018-04-18 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-3365.

Resolution: Fixed

> Add a documentation field for types and update doc generation
> -
>
> Key: KAFKA-3365
> URL: https://issues.apache.org/jira/browse/KAFKA-3365
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Andras Beni
>Priority: Major
>
> Currently the type class does not allow a documentation field. This means we 
> can't auto generate a high level documentation summary for each type in the 
> protocol. Adding this field and updating the generated output would be 
> valuable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


RE: [VOTE] KIP-235 Add DNS alias support for secured connection

2018-04-18 Thread Skrzypek, Jonathan
I have updated the KIP with more details, thanks.


Jonathan Skrzypek 

-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com] 
Sent: 16 April 2018 16:02
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-235 Add DNS alias support for secured connection

Looks good to me.

BTW KAFKA-6195 contains more technical details than the KIP. See if you can
enrich the Motivation section with some of the details.

Thanks

On Fri, Mar 23, 2018 at 12:05 PM, Skrzypek, Jonathan <
jonathan.skrzy...@gs.com> wrote:

> Hi,
>
> I would like to start a vote for KIP-235
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_KIP-2D&d=DwIBaQ&c=7563p3e2zaQw0AB1wrFVgyagb2IE5rTZOYPxLxfZlX4&r=nNmJlu1rR_QFAPdxGlafmDu9_r6eaCbPOM0NM1EHo-E&m=z4z9og6UZJl3q8DYOzkpMV6iKc8Je2PFuG1jSKxWVcA&s=xDIuXjkyb0Tnz2Hwx8P5JzEK8B5NrFpF1U5uYs_rxck&e=
>  
> 235%3A+Add+DNS+alias+support+for+secured+connection
>
> This is a proposition to add an option for reverse dns lookup of
> bootstrap.servers hosts, allowing the use of dns aliases on clusters using
> SASL authentication.
>
>
>
>
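To make the proposal concrete, here is a minimal sketch (hypothetical Python, not the KIP's actual implementation) of what a reverse-lookup option for a bootstrap alias could do: expand the alias into the canonical FQDNs of the hosts behind it, so SASL/Kerberos principal matching can use the real host names.

```python
import socket

def expand_bootstrap_alias(alias, port):
    """Resolve a DNS alias to the canonical FQDNs of the hosts behind
    it, returning one host:port entry per resolved address."""
    _, _, addresses = socket.gethostbyname_ex(alias)
    fqdns = {socket.getfqdn(addr) for addr in addresses}
    return sorted("%s:%d" % (fqdn, port) for fqdn in fqdns)

# e.g. expand_bootstrap_alias("kafka.example.com", 9093) might return
# ["broker1.example.com:9093", "broker2.example.com:9093"]
```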


[jira] [Created] (KAFKA-6801) Restrict Consumer to fetch data from secure port only, and deny from non-secure port.

2018-04-18 Thread VinayKumar (JIRA)
VinayKumar created KAFKA-6801:
-

 Summary: Restrict Consumer to fetch data from secure port only, 
and deny from non-secure port.
 Key: KAFKA-6801
 URL: https://issues.apache.org/jira/browse/KAFKA-6801
 Project: Kafka
  Issue Type: Task
  Components: admin, config, consumer, security
Affects Versions: 0.10.2.1
Reporter: VinayKumar


I have listeners configured with 2 ports as below:  (9092 -> PLAINTEXT, 9093 -> 
SASL_PLAIN)
listeners=PLAINTEXT://:9092, SASL_PLAIN://:9093

For a topic, I want to restrict consumers to consuming data from port 9093 only, 
and consuming data from port 9092 should be denied.

I've gone through the ACL concept, but haven't seen an option to restrict a 
consumer from pulling data from the non-secure port (in this case, 9092).

Can someone please let me know if this is configurable?
Can my requirement be fulfilled? Please provide the necessary info.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-1.0-jdk7 #186

2018-04-18 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6772: Load credentials from ZK before accepting 
connections

--
[...truncated 375.81 KB...]

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic STARTED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout STARTED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage STARTED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo STARTED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker STARTED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.FetcherTest > testFetc

Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-04-18 Thread vito jeng
Matthias,

Thanks for the feedback!

> It's up to you to keep the details part in the KIP or not.

Got it!

> The (incomplete) question was, if we need `StateStoreFailException` or
> if existing `InvalidStateStoreException` could be used? Do you suggest
> that `InvalidStateStoreException` is not thrown at all anymore, but only
> the new sub-classes (just to get a better understanding).

Yes. I suggest that `InvalidStateStoreException` is not thrown at all
anymore, only the new sub-classes.

> Not sure what this sentence means:
>> The internal exception will be wrapped as category exception finally.
> Can you elaborate?

For example, `StreamThreadStateStoreProvider#stores()` will throw
`StreamThreadNotRunningException`(internal exception).
And then the internal exception will be wrapped as
`StateStoreRetryableException` or `StateStoreFailException` during the
`KafkaStreams.store()` and throw to the user.


> Can you explain the purpose of the "internal exceptions". It's unclear
to me atm why they are introduced.

Hmmm... the purpose of the "internal exceptions" is to distinguish between
the different kinds of InvalidStateStoreException.
The original idea is that we can distinguish different state store
exceptions for different handling.
But to be honest, I am not quite sure this is necessary; it may change
somewhat during implementation.

Does it make sense?
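To make the idea concrete, here is a minimal sketch (hypothetical Python; class names follow the KIP discussion, but the wrapping logic is my illustration, not the final design) of internal exceptions being wrapped into category exceptions while staying compatible with existing catch blocks:

```python
# All new exceptions extend InvalidStateStoreException so that
# existing code catching it keeps working unchanged.
class InvalidStateStoreException(Exception):
    """Existing public exception; kept as the common superclass."""

class StateStoreRetryableException(InvalidStateStoreException):
    """Category: the store may become available again (e.g. rebalancing)."""

class StateStoreFailException(InvalidStateStoreException):
    """Category: fatal; retrying will not help."""

class StreamThreadNotRunningException(InvalidStateStoreException):
    """Internal exception thrown deep in the call trace."""

def store_lookup(thread_running):
    # Internal code throws the specific internal exception...
    try:
        if not thread_running:
            raise StreamThreadNotRunningException("stream thread not running")
    except StreamThreadNotRunningException as e:
        # ...and the public store() entry point wraps it into a category
        # exception before it reaches the user.
        raise StateStoreRetryableException(str(e)) from e

# Old user code that catches InvalidStateStoreException still works:
try:
    store_lookup(thread_running=False)
except InvalidStateStoreException as e:
    caught = type(e).__name__
print(caught)  # StateStoreRetryableException
```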




---
Vito

On Mon, Apr 16, 2018 at 5:59 PM, Matthias J. Sax 
wrote:

> Thanks for the update Vito!
>
> It's up to you to keep the details part in the KIP or not.
>
>
> The (incomplete) question was, if we need `StateStoreFailException` or
> if existing `InvalidStateStoreException` could be used? Do you suggest
> that `InvalidStateStoreException` is not thrown at all anymore, but only
> the new sub-classes (just to get a better understanding).
>
>
> Not sure what this sentence means:
>
> > The internal exception will be wrapped as category exception finally.
>
> Can you elaborate?
>
>
> Can you explain the purpose of the "internal exceptions". It's unclear
> to me atm why they are introduced.
>
>
> -Matthias
>
> On 4/10/18 12:33 AM, vito jeng wrote:
> > Matthias,
> >
> > Thanks for the review.
> > I reply separately in the following sections.
> >
> >
> > ---
> > Vito
> >
> > On Sun, Apr 8, 2018 at 1:30 PM, Matthias J. Sax 
> > wrote:
> >
> >> Thanks for updating the KIP and sorry for the long pause...
> >>
> >> Seems you did a very thorough investigation of the code. It's useful to
> >> understand what user facing interfaces are affected.
> >
> > (Some parts might be even too detailed for a KIP.)
> >>
> >
> > I also think too detailed. Especially the section `Changes in call
> trace`.
> > Do you think it should be removed?
> >
> >
> >>
> >> To summarize my current understanding of your KIP, the main change is to
> >> introduce new exceptions that extend `InvalidStateStoreException`.
> >>
> >
> > yep. Keep compatibility in this KIP is important things.
> > I think the best way is that all new exceptions extend from
> > `InvalidStateStoreException`.
> >
> >
> >>
> >> Some questions:
> >>
> >>  - Why do we need ```? Could `InvalidStateStoreException` be used for
> >> this purpose?
> >>
> >
> > Does this question miss some word?
> >
> >
> >>
> >>  - What the superclass of `StateStoreStreamThreadNotRunningException`
> >> is? Should it be `InvalidStateStoreException` or
> `StateStoreFailException`
> >> ?
> >>
> >>  - Is `StateStoreClosed` a fatal or retryable exception ?
> >>
> >>
> > I apologize for not well written parts. I tried to modify some code in
> the
> > recent period and modify the KIP.
> > The modification is now complete. Please look again.
> >
> >
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 2/21/18 5:15 PM, vito jeng wrote:
> >>> Matthias,
> >>>
> >>> Sorry for not response these days.
> >>> I just finished it. Please have a look. :)
> >>>
> >>>
> >>>
> >>> ---
> >>> Vito
> >>>
> >>> On Wed, Feb 14, 2018 at 5:45 AM, Matthias J. Sax <
> matth...@confluent.io>
> >>> wrote:
> >>>
>  Is there any update on this KIP?
> 
>  -Matthias
> 
>  On 1/3/18 12:59 AM, vito jeng wrote:
> > Matthias,
> >
> > Thank you for your response.
> >
> > I think you are right. We need to look at the state both of
> > KafkaStreams and StreamThread.
> >
> > After further understanding of KafkaStreams thread and state store,
> > I am currently rewriting the KIP.
> >
> >
> >
> >
> > ---
> > Vito
> >
> > On Fri, Dec 29, 2017 at 4:32 AM, Matthias J. Sax <
> >> matth...@confluent.io>
> > wrote:
> >
> >> Vito,
> >>
> >> Sorry for this late reply.
> >>
> >> There can be two cases:
> >>
> >>  - either a store got migrated way and thus, is not hosted an the
> >> application instance anymore,
> >>  - or, a store is hosted but the instance is in state rebalance
> >>
> >> For the first case, users need to rediscover the store. For the
> second
> >>>

Failure broker

2018-04-18 Thread natasha87_n
Hi,

I have a cluster of 5 brokers. I created 10 topics with replication factor 5 and 5 
partitions. When one broker is shut down, the cluster stops working. Why?


Can you help me?

I await your response


Thanks!!

Re: [DISCUSS] KIP-255: OAuth Authentication via SASL/OAUTHBEARER

2018-04-18 Thread Rajini Sivaram
Hi Ron,

A few more suggestions and questions:


   1. The KIP says that various callback handlers and login have to be
   configured in order to use OAuth. Could we also say that a default
   implementation is included which is not suitable for production use, but
   this would work out-of-the-box with no additional configuration required
   for callback handlers and login class? So the default callback handler and
   login class that we would use in SaslChannelBuilder for OAuth, if the
   config is not overridden would be the classes that you are including  here (
   OAuthBearerUnsecuredValidatorCallbackHandler etc.)
   2. Following on from 1) above, I think the default  implementation
   classes can be moved to the internal package, since they no longer need
   to be part of the public API, if we just choose them automatically by
   default. I think the only classes that really need to be part of the public
   API are OAuthBearerToken, OAuthBearerLoginModule, OAuthBearerLoginCallback
   and OAuthBearerValidatorCallback. It is hard to tell from the current
   package layout, but the packages that are public currently are
   org.apache.kafka.common.security.auth,
org.apache.kafka.common.security.plain
   and org.apache.kafka.common.security.scram. Callback handlers and login
   class are not public for the other mechanisms.
   3. Can we rename OAuthBearerLoginCallback to OAuthBearerTokenCallback or
   something along those lines since it is used by SaslClient as well as the
   login module?
   4. We use `Ms` as the suffix for fields and methods that refer to
   milliseconds. So, perhaps in OAuthBearerToken, we could have lifetimeMs
   and startTimeMs? I thought initially that lifetime was a time interval
   rather than the wall-clock time. Something like expiryTimeMs may be less
   confusing. But that could just be me (and it is fine to use the
   terminology used in OAuth RFCs, so I will leave it up to you).
   5. I am wondering whether it would be better to define refresh
   parameters as properties in SaslConfigs rather than in the JAAS config.
   We already have similar properties defined for Kerberos, but with kerberos
   prefix. I wonder if defining the properties in a mechanism-independent way
   (e.g. sasl.login.refresh.window.factor) could work with different
   mechanisms? We could use it initially for just OAuth, but if we did unify
   refreshing logins in future, we could deprecate the current
   Kerberos-specific properties and have just one set that works for any
   mechanism that uses token refresh. What do you think?
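For point 5, the kind of refresh scheduling such a mechanism-independent property would control can be sketched as follows (property names and semantics assumed by analogy with the existing Kerberos refresh configs; this is not the KIP's implementation):

```python
import random

def next_refresh_time(issue_time_ms, expiry_time_ms,
                      window_factor=0.8, window_jitter=0.05):
    """Schedule a credential refresh at roughly window_factor of the
    token's lifetime, plus a small random jitter so that many clients
    do not refresh at exactly the same moment."""
    lifetime_ms = expiry_time_ms - issue_time_ms
    jitter = random.uniform(0, window_jitter)
    return issue_time_ms + int(lifetime_ms * (window_factor + jitter))

# A token issued at t=0 with a 1-hour lifetime would be refreshed
# somewhere around the 48-51 minute mark.
t = next_refresh_time(0, 3_600_000)
```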

Thanks,

Rajini


On Thu, Mar 29, 2018 at 11:39 PM, Rajini Sivaram 
wrote:

> Hi Ron,
>
> Thanks for the updates. I had a quick look and it is looking good.
>
> I have updated KIP-86 and the associated PR to with a new config
> sasl.login.callback.handler.class that matches what you are using in this
> KIP.
>
> On Thu, Mar 29, 2018 at 6:27 AM, Ron Dagostino  wrote:
>
>> Hi Rajini.  I have adjusted the KIP to use callbacks and callback handlers
>>
>> throughout.  I also clarified that production implementations of the
>> retrieval and validation callback handlers will require the use of an open
>> source JWT library, and the unsecured implementations are as far as
>> SASL/OAUTHBEARER will go out-of-the-box. Your suggestions, plus this
>> clarification, have allowed much of the code to move into the ".internal"
>> java package; the public-facing API now consists of just 8 Java classes, 1
>> Java interface, and a set of configuration requirements.  I also added a
>> section outlining those configuration requirements since they are extensive
>> (not onerously so -- just not something one can easily remember).
>>
>> Ron
>>
>> On Tue, Mar 13, 2018 at 11:44 AM, Rajini Sivaram > >
>> wrote:
>>
>> > Hi Ron,
>> >
>> > Thanks for the response. All sound good, I think the only outstanding
>> > question is around callbacks vs classes provided through the login
>> context.
>> > As you have pointed out, there are advantages of both approaches. Even
>> > though my preference is for callbacks, it is not a blocker since the
>> > current approach works fine too. I will make the case for callbacks
>> anyway,
>> > using OAuthTokenValidator as an example:
>> >
>> >
>> >- As you mentioned, the main advantage of using callbacks is
>> >consistency. It is the standard plug-in mechanism for SASL
>> > implementations
>> >in Java and keeps code consistent with built-in mechanisms like
>> > Kerberos as
>> >well as our own implementations like PLAIN and SCRAM.
>> >- With the current approach, there are two classes
>> OAuthTokenValidator
>> >and a default implementation OAuthBearerUnsecuredJwtValidator. I was
>> >thinking that we would have a public callback class
>> > OAuthTokenValidatorCallback
>> >instead and a default callback handler
>> >OAuthBearerUnsecuredJwtValidatorCallbackHandler. So it would be two
>> >classes either way?
>> >- JAAS config is very opaque, we

Build failed in Jenkins: kafka-trunk-jdk10 #31

2018-04-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6772: Load credentials from ZK before accepting connections

--
[...truncated 1.48 MB...]
kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThr

[jira] [Created] (KAFKA-6800) Update documentation for SASL/PLAIN and SCRAM to use callbacks

2018-04-18 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-6800:
-

 Summary: Update documentation for SASL/PLAIN and SCRAM to use 
callbacks
 Key: KAFKA-6800
 URL: https://issues.apache.org/jira/browse/KAFKA-6800
 Project: Kafka
  Issue Type: Task
  Components: documentation, security
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 1.2.0


Refer to custom callbacks introduced in KIP-86 in SASL documentation instead of 
replacing login module. Also include `org.apache.kafka.common.security.plain` 
and `org.apache.kafka.common.security.scram` in javadocs since these are now 
part of the public API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-1716) hang during shutdown of ZookeeperConsumerConnector

2018-04-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1716.
--
Resolution: Auto Closed

Closing inactive issue. The Scala consumers have been deprecated and no further 
work is planned, please upgrade to the Java consumer whenever possible.

> hang during shutdown of ZookeeperConsumerConnector
> --
>
> Key: KAFKA-1716
> URL: https://issues.apache.org/jira/browse/KAFKA-1716
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Sean Fay
>Assignee: Neha Narkhede
>Priority: Major
> Attachments: after-shutdown.log, before-shutdown.log, 
> kafka-shutdown-stuck.log
>
>
> It appears to be possible for {{ZookeeperConsumerConnector.shutdown()}} to 
> wedge in the case that some consumer fetcher threads receive messages during 
> the shutdown process.
> Shutdown thread:
> {code}-- Parking to wait for: 
> java/util/concurrent/CountDownLatch$Sync@0x2aaaf3ef06d0
> at jrockit/vm/Locks.park0(J)V(Native Method)
> at jrockit/vm/Locks.park(Locks.java:2230)
> at sun/misc/Unsafe.park(ZJ)V(Native Method)
> at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
> at java/util/concurrent/locks/AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at java/util/concurrent/locks/AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
> at java/util/concurrent/locks/AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
> at java/util/concurrent/CountDownLatch.await(CountDownLatch.java:207)
> at kafka/utils/ShutdownableThread.shutdown(ShutdownableThread.scala:36)
> at kafka/server/AbstractFetcherThread.shutdown(AbstractFetcherThread.scala:71)
> at kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:121)
> at kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:120)
> at scala/collection/TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at scala/collection/mutable/HashTable$class.foreachEntry(HashTable.scala:226)
> at scala/collection/mutable/HashMap.foreachEntry(HashMap.scala:39)
> at scala/collection/mutable/HashMap.foreach(HashMap.scala:98)
> at scala/collection/TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka/server/AbstractFetcherManager.closeAllFetchers(AbstractFetcherManager.scala:120)
> ^-- Holding lock: java/lang/Object@0x2aaaebcc7318[thin lock]
> at kafka/consumer/ConsumerFetcherManager.stopConnections(ConsumerFetcherManager.scala:148)
> at kafka/consumer/ZookeeperConsumerConnector.liftedTree1$1(ZookeeperConsumerConnector.scala:171)
> at kafka/consumer/ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:167){code}
> ConsumerFetcherThread:
> {code}-- Parking to wait for: java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0x2aaaebcc7568
> at jrockit/vm/Locks.park0(J)V(Native Method)
> at jrockit/vm/Locks.park(Locks.java:2230)
> at sun/misc/Unsafe.park(ZJ)V(Native Method)
> at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
> at java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at java/util/concurrent/LinkedBlockingQueue.put(LinkedBlockingQueue.java:306)
> at kafka/consumer/PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at kafka/consumer/ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
> at kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:130)
> at kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:111)
> at scala/collection/immutable/HashMap$HashMap1.foreach(HashMap.scala:224)
> at scala/collection/immutable/HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:111)
> at kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
> at kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
> at kafka/utils/Utils$.inLock(Utils.scala:538)
> at 
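The quoted thread dump shows a classic shutdown deadlock: ZookeeperConsumerConnector.shutdown() awaits ShutdownableThread's CountDownLatch, while the ConsumerFetcherThread is blocked in LinkedBlockingQueue.put(), so the latch is never counted down. A minimal, self-contained Java sketch of the same pattern (the class and variable names are illustrative, not the real Kafka classes) shows why interrupting the blocked worker before awaiting resolves it:

```java
import java.util.concurrent.*;

// Illustrative sketch only: a worker blocks on a bounded queue's put(),
// and shutdown waits on a latch the worker only counts down after put()
// returns -- a deadlock unless the blocked worker is interrupted first.
public class ShutdownSketch {

    public static boolean shutdownCleanly() {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(1); // bounded, like the fetcher's chunk queue
        CountDownLatch done = new CountDownLatch(1);                 // like ShutdownableThread's shutdown latch

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.put(42); // second put() blocks forever: queue is full and nobody drains it
                }
            } catch (InterruptedException e) {
                // interrupt is the only way out of the blocked put()
            } finally {
                done.countDown();
            }
        });
        worker.start();
        try {
            Thread.sleep(200);   // give the worker time to fill the queue and block
            worker.interrupt();  // without this line, done.await() would hang, as in the reported deadlock
            return done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(shutdownCleanly() ? "clean shutdown" : "deadlock");
    }
}
```

The design point is that shutdown must unblock the worker (interrupt, drain the queue, or use a poison pill) before awaiting the latch.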

Jenkins build is back to normal : kafka-trunk-jdk7 #3351

2018-04-18 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-5287) Messages getting repeated in kafka

2018-04-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5287.
--
Resolution: Cannot Reproduce

Closing inactive issue. Please reopen with more details if you think the 
issue still exists.

> Messages getting repeated in kafka
> --
>
> Key: KAFKA-5287
> URL: https://issues.apache.org/jira/browse/KAFKA-5287
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.0
> Environment: Hardware specification(8 Cores , 16 GB RAM,1 TB Harddisk)
>Reporter: Abhimanyu Nagrath
>Priority: Major
>
> I have a topic with 200 partitions containing a total of 3 million 
> messages. It took 5 days to completely process all of them, and as soon as 
> they were processed (i.e. kafka-consumer-groups.sh showed 0 lag in every 
> partition of the topic) I stopped the consumer. But after 6 hours it was 
> again showing a lag of 2 million messages, which I found were duplicates. 
> This happens very frequently. My offsets are stored on the Kafka broker 
> itself. 
> My server configuration is:
> broker.id=1
> delete.topic.enable=true
> #listeners=PLAINTEXT://:9092
> #advertised.listeners=PLAINTEXT://your.host.name:9092
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> log.dirs=/kafka/data/logs
> num.partitions=1
> num.recovery.threads.per.data.dir=5
> log.flush.interval.messages=1
> #log.flush.interval.ms=1000
> log.retention.hours=480
> log.retention.bytes=1073741824
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=30
> zookeeper.connect=:2181
> zookeeper.connection.timeout.ms=6000
> Is there anything in the configuration that I am missing? 
> Any help is appreciated. 
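With offsets committed to the broker, a consumer restart can legitimately replay messages delivered after the last commit (at-least-once semantics), and expired offsets can trigger a full re-read. The usual mitigation is to make processing idempotent, keyed on a stable message ID. A minimal self-contained sketch of that pattern (the message IDs and payloads are illustrative, not from the reporter's setup):

```java
import java.util.*;

// Hedged sketch: deduplicate redelivered messages by a stable key so that
// replays after an offset reset do not cause double-processing.
public class DedupSketch {
    private final Set<String> seen = new HashSet<>();
    private final List<String> processed = new ArrayList<>();

    /** Performs side effects only the first time a given message ID is seen. */
    public boolean process(String messageId, String payload) {
        if (!seen.add(messageId)) {
            return false; // duplicate delivery: skip side effects
        }
        processed.add(payload);
        return true;
    }

    public List<String> processed() {
        return processed;
    }

    public static void main(String[] args) {
        DedupSketch sketch = new DedupSketch();
        sketch.process("m-1", "a");
        sketch.process("m-1", "a"); // replayed duplicate, ignored
        sketch.process("m-2", "b");
        System.out.println(sketch.processed()); // [a, b]
    }
}
```

In a real deployment the seen-key set would live in a store shared across restarts; an in-memory HashSet only illustrates the idea.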



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6772) Broker should load credentials from ZK before requests are allowed

2018-04-18 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6772.
---
Resolution: Fixed
  Reviewer: Jun Rao

> Broker should load credentials from ZK before requests are allowed
> --
>
> Key: KAFKA-6772
> URL: https://issues.apache.org/jira/browse/KAFKA-6772
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 1.1.0, 1.0.1
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 1.0.2, 1.2.0, 1.1.1
>
>
> It is currently possible for clients to get an AuthenticationException during 
> start-up if the brokers have not yet loaded credentials from ZK. This 
> definitely affects SCRAM, but it may also affect delegation tokens.
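Until the broker-side fix, a common client-side mitigation for this race is to retry start-up failures with backoff, since the broker may simply not have loaded SCRAM credentials from ZooKeeper yet. A generic, self-contained retry sketch (the exception type, attempt count, and backoff are illustrative assumptions, not Kafka client behavior):

```java
import java.util.concurrent.Callable;

// Hedged sketch: retry an operation a bounded number of times with a fixed
// backoff, for transient start-up failures such as credentials not yet loaded.
public class StartupRetry {

    public static <T> T withRetries(Callable<T> op, int maxAttempts, long backoffMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;                // e.g. an AuthenticationException during broker start-up
                Thread.sleep(backoffMs); // back off before the next attempt
            }
        }
        throw last; // retries exhausted: surface the final failure
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Succeeds on the third attempt, simulating a broker that finishes
        // loading credentials from ZooKeeper while the client retries.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("credentials not loaded yet");
            return "connected";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```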



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5262) Can't find some consumer group information

2018-04-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5262.
--
Resolution: Cannot Reproduce

Closing inactive issue. Please reopen with more details if you think the 
issue still exists.


> Can't  find  some  consumer group   information
> ---
>
> Key: KAFKA-5262
> URL: https://issues.apache.org/jira/browse/KAFKA-5262
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.10.1.0
>Reporter: miaozhiyong
>Priority: Major
>
> The Kafka client uses the broker to connect to Kafka. I have installed two 
> kafka-manager instances. The consumer doesn't display in kafka-manager, and 
> it can't be queried with the command line:
> kafka-consumer-groups.sh --new-consumer --bootstrap-server
> but the client is OK. Where does the consumer store the lag?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3476) -Xloggc is not recognised by IBM java

2018-04-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3476.
--
Resolution: Won't Fix

Closing this, as the GC values and performance opts can be exported. Please 
reopen if you think otherwise.

>  -Xloggc is not recognised by IBM java
> --
>
> Key: KAFKA-3476
> URL: https://issues.apache.org/jira/browse/KAFKA-3476
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, tools
>Affects Versions: 0.9.0.0
>Reporter: Khirod Patra
>Priority: Major
>
> Getting the below error on an AIX server.
> NOTE: the Java version is:
> --
> java version "1.8.0"
> Java(TM) SE Runtime Environment (build pap6480-20150129_02)
> IBM J9 VM (build 2.8, JRE 1.8.0 AIX ppc64-64 Compressed References 
> 20150116_231420 (JIT enabled, AOT enabled)
> J9VM - R28_Java8_GA_20150116_2030_B231420
> JIT  - tr.r14.java_20150109_82886.02
> GC   - R28_Java8_GA_20150116_2030_B231420_CMPRSS
> J9CL - 20150116_231420)
> JCL - 20150123_01 based on Oracle jdk8u31-b12
> Error :
> ---
> kafka-run-class.sh -name zookeeper -loggc  
> org.apache.zookeeper.server.quorum.QuorumPeerMain 
> ../config/zookeeper.properties
> 
> <verbosegc xmlns="http://www.ibm.com/j9/verbosegc" 
> version="R28_Java8_GA_20150116_2030_B231420_CMPRSS">
> JVMJ9VM007E Command-line option unrecognised: 
> -Xloggc:/home/test_user/containers/kafka_2.11-0.9.0.0/bin/../logs/zookeeper-gc.log
> 
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk10 #30

2018-04-18 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6107) SCRAM user add appears to fail if Kafka has never been started

2018-04-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6107.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

This was fixed as part of the KafkaZkClient changes.

> SCRAM user add appears to fail if Kafka has never been started
> --
>
> Key: KAFKA-6107
> URL: https://issues.apache.org/jira/browse/KAFKA-6107
> Project: Kafka
>  Issue Type: Bug
>  Components: tools, zkclient
>Affects Versions: 0.11.0.0
>Reporter: Dustin Cote
>Priority: Minor
> Fix For: 1.1.0
>
>
> When trying to add a SCRAM user in ZooKeeper without ever having started 
> Kafka, the kafka-configs tool does not handle it well. This is a common use 
> case, because starting a new cluster where you want SCRAM for inter-broker 
> communication would generally run into this problem. Today, the workaround 
> is to start Kafka, add the user, then restart Kafka. Here's how to 
> reproduce:
> 1) Start ZooKeeper
> 2) Run 
> {code}
> bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 
> 'SCRAM-SHA-256=[iterations=8192,password=broker_pwd],SCRAM-SHA-512=[password=broker_pwd]'
>  --entity-type users --entity-name broker
> {code}
> This will result in:
> {code}
> bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 
> 'SCRAM-SHA-256=[iterations=8192,password=broker_pwd],SCRAM-SHA-512=[password=broker_pwd]'
>  --entity-type users --entity-name broker
> Error while executing config command 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /config/changes/config_change_
> org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /config/changes/config_change_
>   at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1001)
>   at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:528)
>   at 
> org.I0Itec.zkclient.ZkClient.createPersistentSequential(ZkClient.java:444)
>   at kafka.utils.ZkPath.createPersistentSequential(ZkUtils.scala:1045)
>   at kafka.utils.ZkUtils.createSequentialPersistentPath(ZkUtils.scala:527)
>   at 
> kafka.admin.AdminUtils$.kafka$admin$AdminUtils$$changeEntityConfig(AdminUtils.scala:600)
>   at 
> kafka.admin.AdminUtils$.changeUserOrUserClientIdConfig(AdminUtils.scala:551)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:72)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:101)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:68)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
> KeeperErrorCode = NoNode for /config/changes/config_change_
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
>   at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:100)
>   at org.I0Itec.zkclient.ZkClient$3.call(ZkClient.java:531)
>   at org.I0Itec.zkclient.ZkClient$3.call(ZkClient.java:528)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:991)
>   ... 11 more
> {code}
> The command doesn't appear to fail, but it does throw an exception. The 
> return code of the script is still 0 and the user is created in ZooKeeper, 
> but this should be cleaned up since it's misleading.
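The misleading exit status arises when a CLI catches and logs an exception without propagating a nonzero code. A hedged sketch of the general fix pattern (the class and error message are illustrative, not the actual ConfigCommand code):

```java
// Illustrative sketch: return an explicit status from the command runner so
// a caught exception is reflected in the process exit code, not just logged.
public class ConfigToolSketch {

    static int run(Runnable command) {
        try {
            command.run();
            return 0;
        } catch (Exception e) {
            System.err.println("Error while executing config command: " + e.getMessage());
            return 1; // propagate failure so shell scripts can detect it
        }
    }

    public static void main(String[] args) {
        int status = run(() -> { throw new IllegalStateException("NoNode"); });
        System.out.println("exit status: " + status); // a real wrapper would pass this to System.exit
    }
}
```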



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)