[GitHub] kafka pull request #3353: KAFKA-5455 - Better Javadocs for the transactional...

2017-06-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3353


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes to enable it, or if the feature is enabled but not working,
please contact infrastructure at infrastruct...@apache.org or file a JIRA
ticket with INFRA.
---


Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-17 Thread Viktor Somogyi
Got it, thanks Hans!


Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-17 Thread Hans Jespersen

Offset commit is something that is done in the act of consuming (or reading) 
Kafka messages. Yes, technically it is a write to the Kafka consumer offsets 
topic, but it's much easier for administrators to think of ACLs in terms of 
whether the user is allowed to write (Produce) or read (Consume) messages, 
rather than the lower-level detail that consuming actually involves both 
reading and writing (albeit only to the offsets topic).

-hans
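The mapping Hans describes can be sketched as a small lookup; this is an illustration only (the REQUIRED_ACLS table and can() helper are hypothetical names, not a Kafka API), using the operation names from Kafka's ACL model and the Describe level that KIP-163 proposes for OffsetFetch:

```python
# Illustrative mapping of client actions to the ACL operation they require,
# per the discussion above. Hypothetical helper; not an exhaustive or
# authoritative list of Kafka's authorization rules.
REQUIRED_ACLS = {
    "Produce":      [("Topic", "Write")],
    "Fetch":        [("Topic", "Read")],
    # Committing offsets writes to the internal offsets topic, but it is
    # authorized as Read on the consumer group, since it is part of consuming.
    "OffsetCommit": [("Group", "Read")],
    # KIP-163 proposes lowering OffsetFetch from Read to Describe.
    "OffsetFetch":  [("Group", "Describe")],
}

def can(action, granted):
    """Check whether granted (resource, operation) pairs cover an action."""
    return all(req in granted for req in REQUIRED_ACLS[action])

granted = {("Topic", "Read"), ("Group", "Read"), ("Group", "Describe")}
print(can("Fetch", granted))    # a consumer with Read ACLs can fetch
print(can("Produce", granted))  # but cannot produce without Write on the topic
```

The point of modeling it this way is that the administrator only ever reasons about Produce/Consume, even though OffsetCommit is physically a write.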





Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-06-17 Thread Viktor Somogyi
Hi Vahid,

+1 for OffsetFetch from me too.

I also wanted to ask about the strangeness of the permissions, e.g. why
OffsetCommit is a Read operation instead of Write, which would intuitively make
more sense to me. Perhaps an expert could shed some light on this? :)

Viktor

On Tue, Jun 13, 2017 at 2:38 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Michal,
>
> Thanks a lot for your feedback.
>
> Your statement about Heartbeat is fair and makes sense. I'll update the
> KIP accordingly.
>
> --Vahid
>
>
>
>
> From: Michal Borowiecki
> To: us...@kafka.apache.org, Vahid S Hashemian <vahidhashem...@us.ibm.com>, dev@kafka.apache.org
> Date: 06/13/2017 01:35 AM
> Subject: Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch
> --
>
>
>
> Hi Vahid,
>
> +1 wrt OffsetFetch.
>
> The "Additional Food for Thought" section mentions Heartbeat as a non-mutating
> action. I don't think that's true, as the GroupCoordinator updates the
> latestHeartbeat field for the member and adds a new object to the
> heartbeatPurgatory; see completeAndScheduleNextHeartbeatExpiration(),
> called from handleHeartbeat().
>
> NB added dev mailing list back into CC as it seems to have been lost along
> the way.
>
> Cheers,
>
> Michał
>
>
> On 12/06/17 18:47, Vahid S Hashemian wrote:
> Hi Colin,
>
> Thanks for the feedback.
>
> To be honest, I'm not sure either why Read was selected instead of Write
> for mutating APIs in the initial design (I asked Ewen on the corresponding
> JIRA and he seemed unsure too).
> Perhaps someone who was involved in the design can clarify.
>
> Thanks.
> --Vahid
>
>
>
>
> From: Colin McCabe
> To: us...@kafka.apache.org
> Date: 06/12/2017 10:11 AM
> Subject: Re: [DISCUSS] KIP-163: Lower the Minimum Required ACL
> Permission of OffsetFetch
>
>
>
> Hi Vahid,
>
> I think you make a valid point that the ACLs controlling group
> operations are not very intuitive.
>
> This is probably a dumb question, but why are we using Read for mutating
> APIs?  Shouldn't that be Write?
>
> The distinction between Describe and Read makes a lot of sense for
> Topics.  A group isn't really something that you "read" from in the same
> way as a topic, so it always felt kind of weird there.
>
> best,
> Colin
>
>
> On Thu, Jun 8, 2017, at 11:29, Vahid S Hashemian wrote:
>
> Hi all,
>
> I'm resending my earlier note hoping it would spark some conversation
> this
> time around :)
>
> Thanks.
> --Vahid
>
>
>
>
> From: "Vahid S Hashemian"
> To: dev@kafka.apache.org, "Kafka User" us...@kafka.apache.org
>
> Date:   05/30/2017 08:33 AM
> Subject:KIP-163: Lower the Minimum Required ACL Permission of
> OffsetFetch
>
>
>
> Hi,
>
> I started a new KIP to improve the minimum required ACL permissions of
> some of the APIs:
>
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch
>
>
>
> The KIP is to address KAFKA-4585.
>
> Feedback and suggestions are welcome!
>
> Thanks.
> --Vahid
>
> --
> Michal Borowiecki
> Senior Software Engineer L4
> T: +44 208 742 1600
> +44 203 249 8448
>
> E: michal.borowie...@openbet.com
> W: www.openbet.com
> OpenBet Ltd
> Chiswick Park Building 9
> 566 Chiswick High Rd
> London
> W4 5XT
> UK
> 
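Michał's observation above, that a Heartbeat request mutates coordinator state rather than merely reading it, can be sketched in miniature. This is a toy Python stand-in for illustration only; the real GroupCoordinator is Scala, and the names below mirror but do not reproduce it:

```python
import heapq
import time

class ToyGroupCoordinator:
    """Toy model of why Heartbeat is a mutating operation (illustrative;
    see handleHeartbeat() in the real Scala coordinator)."""
    def __init__(self):
        self.latest_heartbeat = {}     # member_id -> last heartbeat timestamp
        self.heartbeat_purgatory = []  # heap of (deadline, member_id) expirations

    def handle_heartbeat(self, member_id, session_timeout_s=10.0):
        now = time.monotonic()
        # Mutation 1: record the member's latest heartbeat.
        self.latest_heartbeat[member_id] = now
        # Mutation 2: schedule the next expiration check, analogous to
        # completeAndScheduleNextHeartbeatExpiration().
        heapq.heappush(self.heartbeat_purgatory,
                       (now + session_timeout_s, member_id))

coord = ToyGroupCoordinator()
coord.handle_heartbeat("consumer-1")
print(len(coord.heartbeat_purgatory))  # 1: state changed, so not read-only
```

Even this toy version makes the KIP's classification question concrete: every heartbeat writes two pieces of coordinator state.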


[GitHub] kafka pull request #3361: KAFKA-5435: Improve producer state loading after f...

2017-06-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3361




[jira] [Created] (KAFKA-5465) FetchResponse v0 does not return any messages when max_bytes smaller than v2 message set

2017-06-17 Thread Dana Powers (JIRA)
Dana Powers created KAFKA-5465:
--

 Summary: FetchResponse v0 does not return any messages when 
max_bytes smaller than v2 message set 
 Key: KAFKA-5465
 URL: https://issues.apache.org/jira/browse/KAFKA-5465
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Dana Powers
Priority: Minor


In prior releases, when consuming uncompressed messages, FetchResponse v0 
returns a message if it is smaller than the max_bytes sent in the FetchRequest. 
In 0.11.0.0 RC0, when messages are stored as v2 internally, the response is 
empty unless the full message set is smaller than max_bytes. In some 
configurations, this may cause old consumers to get stuck on large messages 
where previously they were able to make progress one message at a time.

For example, when I produce 10 5KB messages using ProduceRequest v0 and then 
attempt FetchRequest v0 with partition max bytes = 6KB (larger than a single 
message but smaller than all 10 messages together), I get an empty message set 
from 0.11.0.0. Previous brokers would have returned a single message.
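The behavior difference can be sketched with a toy model. The helper functions below are hypothetical and only illustrate the all-or-nothing effect of down-converting a whole v2 batch; the real logic lives in the broker's log and fetch path:

```python
# Toy model of the FetchResponse v0 regression described above.
MSG_SIZE = 5 * 1024   # each produced message is ~5 KB
NUM_MSGS = 10
MAX_BYTES = 6 * 1024  # partition max_bytes in the FetchRequest

def old_broker_fetch(msg_sizes, max_bytes):
    """Pre-0.11 behavior: return individual messages while they fit."""
    out, used = [], 0
    for size in msg_sizes:
        if used + size > max_bytes:
            break
        out.append(size)
        used += size
    return out

def new_broker_fetch(batch_size, max_bytes):
    """0.11.0.0 RC0 behavior with v2 batches: the whole message set is
    down-converted as one unit, so the response is all-or-nothing."""
    return batch_size if batch_size <= max_bytes else 0

sizes = [MSG_SIZE] * NUM_MSGS
print(len(old_broker_fetch(sizes, MAX_BYTES)))  # old broker returns 1 message
print(new_broker_fetch(sum(sizes), MAX_BYTES))  # new broker returns 0 bytes
```

With these numbers the old path makes progress one message at a time, while the new path returns nothing because the 50 KB batch exceeds the 6 KB limit.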



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Jenkins build is back to normal : kafka-trunk-jdk7 #2422

2017-06-17 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #1724

2017-06-17 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3364: HOTFIX: Improve error handling for ACL requests

2017-06-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3364




Build failed in Jenkins: kafka-trunk-jdk7 #2421

2017-06-17 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5031; Follow-up with small cleanups/improvements

--
[...truncated 999.36 KB...]
kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch STARTED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround PASSED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout STARTED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout PASSED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord STARTED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots STARTED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED


[GitHub] kafka pull request #3363: MINOR: Some small cleanups/improvements to KAFKA-5...

2017-06-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3363




Build failed in Jenkins: kafka-trunk-jdk8 #1723

2017-06-17 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Remove unused logger

[ismael] KAFKA-5463; Controller incorrectly logs rack information when new

--
[...truncated 973.42 KB...]
java.lang.AssertionError: Found unexpected threads, 
allThreads=Set(ThrottledRequestReaper-Produce, metrics-meter-tick-thread-2, 
SessionTracker, main, Signal Dispatcher, Reference Handler, 
ExpirationReaper-0-Produce, ExpirationReaper-0-DeleteRecords, Test 
worker-SendThread(127.0.0.1:54970), ThrottledRequestReaper-Fetch, 
ZkClient-EventThread-40834-127.0.0.1:54970, ThrottledRequestReaper-Request, 
kafka-request-handler-2, /0:0:0:0:0:0:0:1:52148 to /0:0:0:0:0:0:0:1:39129 
workers Thread 2, Test worker, SyncThread:0, ReplicaFetcherThread-0-1, 
ProcessThread(sid:0 cport:54970):, /0:0:0:0:0:0:0:1:52148 to 
/0:0:0:0:0:0:0:1:39129 workers Thread 3, NIOServerCxn.Factory:/127.0.0.1:0, 
Test worker-EventThread, ExpirationReaper-0-Fetch, Finalizer, 
kafka-coordinator-heartbeat-thread | group1, ForkJoinPool-1-worker-7, 
metrics-meter-tick-thread-1)

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > classMethod STARTED

kafka.security.auth.ZkAuthorizationTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads, 
allThreads=Set(ThrottledRequestReaper-Produce, metrics-meter-tick-thread-2, 
SessionTracker, main, Signal Dispatcher, Reference Handler, 
ExpirationReaper-0-Produce, ExpirationReaper-0-DeleteRecords, Test 
worker-SendThread(127.0.0.1:54970), ThrottledRequestReaper-Fetch, 
ZkClient-EventThread-40834-127.0.0.1:54970, ThrottledRequestReaper-Request, 

[jira] [Resolved] (KAFKA-5463) Controller incorrectly logs rack information when new brokers are added

2017-06-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5463.

   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 3358
[https://github.com/apache/kafka/pull/3358]

> Controller incorrectly logs rack information when new brokers are added
> ---
>
> Key: KAFKA-5463
> URL: https://issues.apache.org/jira/browse/KAFKA-5463
> Project: Kafka
>  Issue Type: Bug
>  Components: config, controller
>Affects Versions: 0.10.2.0, 0.11.0.0, 0.10.2.1
> Environment: Ubuntu Trusty (14.04.5), Oracle JDK 8
>Reporter: Jeff Chao
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> When a new broker is added, on an {{UpdateMetadata}} request, rack 
> information won't be present in the state-change log even if it is configured.
> Example:
> {{pri=TRACE t=Controller-1-to-broker-0-send-thread at=logger Controller 1 
> epoch 1 received response {error_code=0} for a request sent to broker 
> : (id: 0 rack: null)}}
> This happens because {{ControllerChannelManager}} always instantiates a 
> {{Node}} using the same constructor whether or not rack-aware is configured. 
> We're happy to contribute a patch since this causes some confusion when 
> running with rack-aware replica placement.
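The bug pattern is easy to sketch: a node object constructed without its optional rack field logs `rack: null` even when a rack is configured. This is a minimal Python stand-in for the Java `Node` class, with illustrative names:

```python
class Node:
    """Minimal stand-in for Kafka's Node class; illustrative only."""
    def __init__(self, node_id, host, port, rack=None):
        self.node_id = node_id
        self.host = host
        self.port = port
        self.rack = rack

    def __str__(self):
        return f"{self.host}:{self.port} (id: {self.node_id} rack: {self.rack})"

broker_rack = "us-east-1a"  # rack configured on the broker

# Buggy path: the channel manager always uses the rack-less constructor.
buggy = Node(0, "broker-0", 9092)
# Fixed path: pass the configured rack through to the constructor.
fixed = Node(0, "broker-0", 9092, broker_rack)

print(buggy)  # broker-0:9092 (id: 0 rack: None)
print(fixed)  # broker-0:9092 (id: 0 rack: us-east-1a)
```

The fix is correspondingly small: choose the rack-aware construction whenever the broker's rack is configured.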





[GitHub] kafka pull request #3358: KAFKA-5463: Controller incorrectly logs rack infor...

2017-06-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3358




[GitHub] kafka pull request #3365: MINOR: Unused logger removed.

2017-06-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3365

