[jira] [Assigned] (KAFKA-4291) TopicCommand --describe shows topics marked for deletion as under-replicated and unavailable (KIP-137)

2017-06-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-4291:
--

Assignee: Mickael Maison  (was: Ismael Juma)

> TopicCommand --describe shows topics marked for deletion as under-replicated 
> and unavailable (KIP-137)
> --
>
> Key: KAFKA-4291
> URL: https://issues.apache.org/jira/browse/KAFKA-4291
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.0.1
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>  Labels: kip
> Fix For: 0.11.0.0
>
>
> When using kafka-topics.sh --describe with --under-replicated-partitions or 
> --unavailable-partitions, topics marked for deletion are listed.
> While it is debatable whether we want to list such topics this way, it 
> should at least print that the topic is marked for deletion, like --list 
> does. 
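The behavior the issue asks for can be sketched as follows. This is an illustrative Python model of the annotation logic, not Kafka's actual `TopicCommand` internals; all function and parameter names here are hypothetical.

```python
# Sketch of KAFKA-4291's request: when describing partitions, annotate
# topics that are marked for deletion instead of listing them silently.
# Names and output layout are illustrative, not TopicCommand's real code.

def describe_partition(topic, partition, replicas, isr, leader, marked_for_deletion):
    leader_str = str(leader) if leader is not None else "none"
    line = ("Topic: %s\tPartition: %d\tLeader: %s\tReplicas: %s\tIsr: %s"
            % (topic, partition, leader_str,
               ",".join(map(str, replicas)), ",".join(map(str, isr))))
    if topic in marked_for_deletion:
        # The annotation --list already prints, applied to --describe output.
        line += "\t(marked for deletion)"
    return line

def is_under_replicated(replicas, isr):
    # Fewer in-sync replicas than assigned replicas.
    return len(isr) < len(replicas)

def is_unavailable(leader):
    # No leader means the partition cannot be served.
    return leader is None
```

A topic awaiting deletion naturally looks under-replicated or unavailable as replicas disappear, which is why the annotation matters for these filters.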



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5355) Broker returns messages beyond "latest stable offset" to transactional consumer in read_committed mode

2017-06-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035841#comment-16035841
 ] 

Ismael Juma commented on KAFKA-5355:


We merged [~hachikuji]'s commit that should fix this. Leaving the JIRA open as 
Jason is still working on additional test cases.

> Broker returns messages beyond "latest stable offset" to transactional 
> consumer in read_committed mode
> --
>
> Key: KAFKA-5355
> URL: https://issues.apache.org/jira/browse/KAFKA-5355
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
> Attachments: test.log
>
>
> This issue is exposed by the new Streams EOS integration test.
> Streams has two tasks (i.e., two producers with {{pid}} 0 and 2000), both 
> writing to output topic {{output}} with one partition (replication factor 1).
> The test uses a transactional consumer with {{group.id=readCommitted}} to 
> read the data from the {{output}} topic. When it reads the data, each producer 
> has committed 10 records (one producer writes messages with {{key=0}} and the 
> other with {{key=1}}). Furthermore, each producer has an open transaction with 
> 5 uncommitted records written.
> The test fails, as we expect to see 10 records per key, but we get 15 for 
> key=1:
> {noformat}
> java.lang.AssertionError: 
> Expected: <[KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 6), 
> KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), 
> KeyValue(1, 36), KeyValue(1, 45)]>
>  but: was <[KeyValue(1, 0), KeyValue(1, 1), KeyValue(1, 3), KeyValue(1, 
> 6), KeyValue(1, 10), KeyValue(1, 15), KeyValue(1, 21), KeyValue(1, 28), 
> KeyValue(1, 36), KeyValue(1, 45), KeyValue(1, 55), KeyValue(1, 66), 
> KeyValue(1, 78), KeyValue(1, 91), KeyValue(1, 105)]>
> {noformat}
> Dumping the segment shows that there are two commit markers (one for each 
> producer) for the first 10 messages written. Furthermore, there are 5 pending 
> records. Thus, the "latest stable offset" should be 21 (20 messages plus 2 commit 
> markers) and no data should be returned beyond this offset.
> Dumped Log Segment {{output-0}}
> {noformat}
> Starting offset: 0
> baseOffset: 0 lastOffset: 9 baseSequence: 0 lastSequence: 9 producerId: 0 
> producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 0 
> CreateTime: 1496255947332 isvalid: true size: 291 magic: 2 compresscodec: 
> NONE crc: 600535135
> baseOffset: 10 lastOffset: 10 baseSequence: -1 lastSequence: -1 producerId: 0 
> producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 291 
> CreateTime: 1496256005429 isvalid: true size: 78 magic: 2 compresscodec: NONE 
> crc: 3458060752
> baseOffset: 11 lastOffset: 20 baseSequence: 0 lastSequence: 9 producerId: 
> 2000 producerEpoch: 2 partitionLeaderEpoch: 0 isTransactional: true position: 
> 369 CreateTime: 1496255947322 isvalid: true size: 291 magic: 2 compresscodec: 
> NONE crc: 3392915713
> baseOffset: 21 lastOffset: 25 baseSequence: 10 lastSequence: 14 producerId: 0 
> producerEpoch: 6 partitionLeaderEpoch: 0 isTransactional: true position: 660 
> CreateTime: 1496255947342 isvalid: true size: 176 magic: 2 compresscodec: 
> NONE crc: 3513911368
> baseOffset: 26 lastOffset: 26 baseSequence: -1 lastSequence: -1 producerId: 
> 2000 producerEpoch: 2 partitionLeaderEpoch: 0 isTransactional: true position: 
> 836 CreateTime: 1496256011784 isvalid: true size: 78 magic: 2 compresscodec: 
> NONE crc: 1619151485
> {noformat}
> Dump with {{--deep-iteration}}
> {noformat}
> Starting offset: 0
> offset: 0 position: 0 CreateTime: 1496255947323 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 0 
> headerKeys: [] key: 1 payload: 0
> offset: 1 position: 0 CreateTime: 1496255947324 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 1 
> headerKeys: [] key: 1 payload: 1
> offset: 2 position: 0 CreateTime: 1496255947325 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 2 
> headerKeys: [] key: 1 payload: 3
> offset: 3 position: 0 CreateTime: 1496255947326 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 3 
> headerKeys: [] key: 1 payload: 6
> offset: 4 position: 0 CreateTime: 1496255947327 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 sequence: 4 
> headerKeys: [] key: 1 payload: 10
> offset: 5 position: 0 CreateTime: 1496255947328 isvalid: true keysize: 8 
> valuesize: 8 magic: 2 compresscodec: NONE crc: 600535135 
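The "latest stable offset" (LSO) arithmetic in the report above can be sketched as follows: the LSO is the first offset of the earliest still-open transaction, and a `read_committed` consumer must not be given records at or beyond it. The batch list is transcribed from the segment dump in the issue; the function and field names are simplified illustrations, not Kafka's actual classes.

```python
def last_stable_offset(batches, high_watermark):
    """batches: (base_offset, producer_id, is_control) tuples in offset order.
    A control batch (commit/abort marker) closes that producer's open txn."""
    open_txn_start = {}  # producer_id -> base offset of its open transaction
    for base, pid, is_control in batches:
        if is_control:
            open_txn_start.pop(pid, None)
        elif pid not in open_txn_start:
            open_txn_start[pid] = base
    # LSO = start of the earliest open transaction, else the high watermark.
    return min(open_txn_start.values(), default=high_watermark)

# Batches from the dumped segment (baseOffset, producerId, isControl):
batches = [
    (0, 0, False),      # producer 0, committed data (offsets 0-9)
    (10, 0, True),      # commit marker for producer 0
    (11, 2000, False),  # producer 2000, committed data (offsets 11-20)
    (21, 0, False),     # producer 0, open transaction (offsets 21-25)
    (26, 2000, True),   # commit marker for producer 2000
]
```

With these batches the earliest open transaction starts at offset 21, matching the expected LSO of 21 in the report; the bug was that the broker served records beyond it anyway.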

[jira] [Resolved] (KAFKA-5374) AdminClient gets "server returned information about unknown correlation ID" when communicating with older brokers

2017-06-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5374.

Resolution: Fixed

> AdminClient gets "server returned information about unknown correlation ID" 
> when communicating with older brokers
> -
>
> Key: KAFKA-5374
> URL: https://issues.apache.org/jira/browse/KAFKA-5374
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> AdminClient gets "server returned information about unknown correlation ID" 
> when communicating with older brokers.
> The messages look like this:
> {code}
>  [2017-06-02 22:58:29,774] ERROR Internal server error on -2: server returned 
> information about unknown correlation ID 5 
> (org.apache.kafka.clients.admin.KafkaAdminClient)
> {code}
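A simplified sketch of why a correlation-ID mismatch like the one logged above can occur: a Kafka client tags each request with a monotonically increasing correlation ID and expects responses in send order, so if one side drops or mishandles an in-flight request (for example around a protocol version downgrade with an older broker), the next response's ID no longer matches the head of the in-flight queue. This is an illustration of the mechanism, not `KafkaAdminClient`'s actual code.

```python
from collections import deque

class Connection:
    """Toy model of per-connection correlation-ID bookkeeping."""

    def __init__(self):
        self.next_id = 0
        self.in_flight = deque()  # correlation IDs awaiting a response

    def send(self, request):
        cid = self.next_id
        self.next_id += 1
        self.in_flight.append(cid)
        return cid

    def receive(self, response_cid):
        # Responses must match in-flight requests in order.
        expected = self.in_flight.popleft()
        if response_cid != expected:
            raise RuntimeError(
                "server returned information about unknown correlation ID %d"
                % response_cid)
        return expected
```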





[jira] [Commented] (KAFKA-5019) Exactly-once upgrade notes

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035840#comment-16035840
 ] 

ASF GitHub Bot commented on KAFKA-5019:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3212


> Exactly-once upgrade notes
> --
>
> Key: KAFKA-5019
> URL: https://issues.apache.org/jira/browse/KAFKA-5019
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: documentation, exactly-once
> Fix For: 0.11.0.0
>
>
> We have added some basic upgrade notes, but we need to flesh them out. We 
> should cover every item that has compatibility implications as well new and 
> updated protocol APIs.





[jira] [Updated] (KAFKA-5019) Exactly-once upgrade notes

2017-06-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5019:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3212
[https://github.com/apache/kafka/pull/3212]






[GitHub] kafka pull request #3212: KAFKA-5019: Upgrades notes for idempotent/transact...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3212


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5374) AdminClient gets "server returned information about unknown correlation ID" when communicating with older brokers

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035835#comment-16035835
 ] 

ASF GitHub Bot commented on KAFKA-5374:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3220







[GitHub] kafka pull request #3220: KAFKA-5374: Set allow auto topic creation to false...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3220




[jira] [Commented] (KAFKA-5355) Broker returns messages beyond "latest stable offset" to transactional consumer in read_committed mode

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035834#comment-16035834
 ] 

ASF GitHub Bot commented on KAFKA-5355:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3221



[GitHub] kafka pull request #3221: KAFKA-5355: DelayedFetch should propagate isolatio...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3221




[GitHub] kafka pull request #3222: MINOR: Code Cleanup

2017-06-02 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/3222

MINOR: Code Cleanup

Clean up includes
- Removing some seemingly unnecessary `@SuppressWarnings` annotations
- Resolving some Java warnings
- Closing unclosed Closeable objects
- Removing unused code
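The "closing unclosed Closeable objects" item above is about releasing resources deterministically rather than relying on garbage collection. A minimal Python analogue of the same discipline, using a context manager in place of Java's try-with-resources; the `Resource` class is purely illustrative:

```python
from contextlib import closing

class Resource:
    """Stand-in for any object exposing a close() method."""

    def __init__(self):
        self.closed = False

    def read(self):
        return "data"

    def close(self):
        self.closed = True

# closing() guarantees close() runs even if read() raises,
# which is the property a leaked Closeable would lose.
with closing(Resource()) as r:
    payload = r.read()
```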

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka minor/code_cleanup_1706

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3222.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3222


commit 622c6989c78ef3f4ef6fc0a7a0e05b84c77836a1
Author: Vahid Hashemian 
Date:   2017-06-03T03:08:46Z

MINOR: Code Cleanup

Clean up includes
- Removing some unused `@SuppressWarnings` annotations
- Resolved some Java warnings
- Closing unclosed Closable objects
- Removing unused code






Build failed in Jenkins: kafka-trunk-jdk8 #1647

2017-06-02 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Reuse decompression buffers in log cleaner

[ismael] KAFKA-5272; Policy for Alter Configs (KIP-133)

--
[...truncated 2.02 MB...]
org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidClientFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validClientFinalMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
invalidServerFirstMessage PASSED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage STARTED

org.apache.kafka.common.security.scram.ScramMessagesTest > 
validServerFinalMessage PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryWithoutPasswordConfiguration PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testSingleOption STARTED

org.apache.kafka.common.security.JaasContextTest > testSingleOption PASSED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes STARTED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes PASSED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
STARTED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue STARTED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback STARTED

org.apache.kafka.common.security.JaasContextTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #1646

2017-06-02 Thread Apache Jenkins Server
See 




[jira] [Commented] (KAFKA-5355) Broker returns messages beyond "latest stable offset" to transactional consumer in read_committed mode

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035770#comment-16035770
 ] 

ASF GitHub Bot commented on KAFKA-5355:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3221

KAFKA-5355: DelayedFetch should propagate isolation level to log read



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5355

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3221.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3221


commit 82430eefb669718a35b93a34a2c8afec2230351b
Author: Jason Gustafson 
Date:   2017-06-03T02:26:39Z

KAFKA-5355: DelayedFetch should propagate isolation level to log read





[GitHub] kafka pull request #3221: KAFKA-5355: DelayedFetch should propagate isolatio...

2017-06-02 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3221

KAFKA-5355: DelayedFetch should propagate isolation level to log read



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5355

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3221.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3221


commit 82430eefb669718a35b93a34a2c8afec2230351b
Author: Jason Gustafson 
Date:   2017-06-03T02:26:39Z

KAFKA-5355: DelayedFetch should propagate isolation level to log read






[jira] [Commented] (KAFKA-5272) Improve validation for Describe/Alter Configs (KIP-133)

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035764#comment-16035764
 ] 

ASF GitHub Bot commented on KAFKA-5272:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3210


> Improve validation for Describe/Alter Configs (KIP-133)
> ---
>
> Key: KAFKA-5272
> URL: https://issues.apache.org/jira/browse/KAFKA-5272
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> TopicConfigHandler.processConfigChanges() warns about certain topic configs. 
> We should include such validations in alter configs and reject the change if 
> the validation fails. Note that this should be done without changing the 
> behaviour of the ConfigCommand (as it does not have access to the broker 
> configs).
> We should consider adding other validations like KAFKA-4092 and KAFKA-4680.
> Finally, the AlterConfigsPolicy mentioned in KIP-133 will be added at the 
> same time.
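The validate-then-apply flow this issue describes can be sketched as a pluggable check that runs before an alter-configs change is accepted, rejecting the request when validation fails. The policy interface, rule name, and exception below are hypothetical illustrations, not the actual AlterConfigsPolicy API added by KIP-133.

```python
class PolicyViolation(Exception):
    """Raised by a policy check to reject a proposed config change."""

def alter_configs(current, proposed, policies):
    # Every policy must pass before any change is applied.
    for check in policies:
        check(proposed)  # raises PolicyViolation to reject
    merged = dict(current)
    merged.update(proposed)
    return merged

def positive_segment_bytes(cfg):
    # Illustrative rule: a non-positive segment size is rejected
    # instead of merely logged as a warning.
    if "segment.bytes" in cfg and int(cfg["segment.bytes"]) <= 0:
        raise PolicyViolation("segment.bytes must be positive")
```

The point of the change is exactly this ordering: validations that previously only produced warnings in `TopicConfigHandler.processConfigChanges()` run up front, so an invalid value never reaches the topic.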





[GitHub] kafka pull request #3210: KAFKA-5272: Policy for Alter Configs (KIP-133)

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3210




[GitHub] kafka pull request #3220: KAFKA-5374: Set allow auto topic creation to false...

2017-06-02 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3220

KAFKA-5374: Set allow auto topic creation to false when requesting node 
information only

It avoids the need to handle protocol downgrades and it's safe (i.e. it 
will never cause
the auto creation of topics).
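The fix described above can be sketched as follows: when the client only needs broker (node) information, it issues a metadata request that names no topics and sets the auto-creation flag to false, so the lookup can never create topics as a side effect. The request shape below is a simplified illustration, not the wire-protocol encoding.

```python
def build_metadata_request(topics=None, nodes_only=False):
    """Build a simplified metadata request dict.

    nodes_only=True asks for cluster/broker metadata with an empty
    topic list, so auto topic creation can safely be disabled.
    """
    if nodes_only:
        return {"topics": [], "allow_auto_topic_creation": False}
    return {"topics": list(topics or []), "allow_auto_topic_creation": True}
```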

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-5374-admin-client-metadata

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3220.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3220


commit 7b255c597f90a0823d7b2e594d6bb32dda04087d
Author: Colin P. Mccabe 
Date:   2017-06-03T00:01:02Z

KAFKA-5374. AdminClient gets "server returned information about unknown 
correlation ID" when communicating with older brokers

commit 29cde9a008efcdc1ea663983e3f5dbdd8944bed4
Author: Ismael Juma 
Date:   2017-06-03T01:31:46Z

Fix checkstyle and add comments






[jira] [Commented] (KAFKA-5150) LZ4 decompression is 4-5x slower than Snappy on small batches / messages

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035737#comment-16035737
 ] 

ASF GitHub Bot commented on KAFKA-5150:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3180


> LZ4 decompression is 4-5x slower than Snappy on small batches / messages
> 
>
> Key: KAFKA-5150
> URL: https://issues.apache.org/jira/browse/KAFKA-5150
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.11.0.0, 0.10.2.1
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
> Fix For: 0.11.0.0, 0.10.2.2
>
>
> I benchmarked RecordsIterator.DeepRecordsIterator instantiation on small batch 
> sizes with small messages after observing some performance bottlenecks in the 
> consumer. 
> For batch sizes of 1 with messages of 100 bytes, LZ4 heavily underperforms 
> compared to Snappy (see benchmark below). Most of our time is currently spent 
> allocating memory blocks in KafkaLZ4BlockInputStream, due to the fact that we 
> default to larger 64kB block sizes. Some quick testing shows we could improve 
> performance by almost an order of magnitude for small batches and messages if 
> we re-used buffers between instantiations of the input stream.
> [Benchmark 
> Code|https://github.com/xvrl/kafka/blob/small-batch-lz4-benchmark/clients/src/test/java/org/apache/kafka/common/record/DeepRecordsIteratorBenchmark.java#L86]
> {code}
> Benchmark  (compressionType)  
> (messageSize)   Mode  Cnt   Score   Error  Units
> DeepRecordsIteratorBenchmark.measureSingleMessageLZ4  
>   100  thrpt   20   84802.279 ±  1983.847  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage SNAPPY  
>   100  thrpt   20  407585.747 ±  9877.073  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage   NONE  
>   100  thrpt   20  579141.634 ± 18482.093  ops/s
> {code}
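The buffer-reuse idea suggested in the description can be sketched as a small free list of `ByteBuffer`s that a decompressing stream borrows from and returns to, instead of allocating a fresh 64 kB block on every instantiation. This is a hypothetical sketch under that assumption; the class and method names are illustrative, not Kafka's actual API:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical free list of decompression buffers: callers borrow a buffer,
// use it, and return it, so repeated stream instantiations avoid re-allocating
// the (default 64 kB) block each time. Not thread-safe; use one pool per thread.
public class DecompressionBufferPool {
    private final Deque<ByteBuffer> free = new ArrayDeque<>();
    private final int blockSize;

    public DecompressionBufferPool(int blockSize) {
        this.blockSize = blockSize;
    }

    // Reuse a previously released buffer if one is available, else allocate.
    public ByteBuffer acquire() {
        ByteBuffer buffer = free.pollFirst();
        return buffer != null ? buffer : ByteBuffer.allocate(blockSize);
    }

    // Reset the buffer and make it available for the next acquire().
    public void release(ByteBuffer buffer) {
        buffer.clear();
        free.addFirst(buffer);
    }

    public static void main(String[] args) {
        DecompressionBufferPool pool = new DecompressionBufferPool(64 * 1024);
        ByteBuffer first = pool.acquire();
        pool.release(first);
        ByteBuffer second = pool.acquire();
        // The second acquire reuses the exact buffer instance released above.
        System.out.println(first == second); // prints true
    }
}
```

The point of the sketch is only that the allocation happens once per pool rather than once per input stream, which is where the benchmark above spends most of its time.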





[GitHub] kafka pull request #3180: MINOR: reuse decompression buffers in log cleaner

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3180




[jira] [Commented] (KAFKA-5272) Improve validation for Describe/Alter Configs (KIP-133)

2017-06-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035728#comment-16035728
 ] 

Ismael Juma commented on KAFKA-5272:


I will merge AlterConfigPolicy soon. The rest of the work is not done yet.

> Improve validation for Describe/Alter Configs (KIP-133)
> ---
>
> Key: KAFKA-5272
> URL: https://issues.apache.org/jira/browse/KAFKA-5272
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> TopicConfigHandler.processConfigChanges() warns about certain topic configs. 
> We should include such validations in alter configs and reject the change if 
> the validation fails. Note that this should be done without changing the 
> behaviour of the ConfigCommand (as it does not have access to the broker 
> configs).
> We should consider adding other validations like KAFKA-4092 and KAFKA-4680.
> Finally, the AlterConfigsPolicy mentioned in KIP-133 will be added at the 
> same time.





[GitHub] kafka pull request #3218: KAFKA-5373 : Revert breaking change to console con...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3218




[jira] [Commented] (KAFKA-4325) Improve processing of late records for window operations

2017-06-02 Thread Jeyhun Karimov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035688#comment-16035688
 ] 

Jeyhun Karimov commented on KAFKA-4325:
---

[~mjsax], would this jira require a KIP? 

> Improve processing of late records for window operations
> 
>
> Key: KAFKA-4325
> URL: https://issues.apache.org/jira/browse/KAFKA-4325
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Minor
>
> Windows are kept until their retention time has passed. If a late arriving 
> record is processed that is older than any window kept, a new window is created 
> containing this single late arriving record, the aggregation is computed, and 
> the window is immediately discarded afterward (as it is older than the 
> retention time).
> This behavior might cause problems for downstream applications, as the original 
> window aggregate might be overwritten with the late single-record aggregate 
> value. Thus, we should rather not process the late arriving record in this 
> case.
> However, data loss might not be acceptable for all use cases. In order to 
> enable users not to lose any data, window operators should allow registering 
> a handler function that is called instead of just dropping the late arriving 
> record.
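The drop-or-handle choice described in the issue can be sketched roughly as follows. This is an illustrative sketch only; the interface and names are hypothetical and not the actual Kafka Streams API:

```java
// Illustrative sketch of the proposal above: a record older than every
// retained window is either dropped or handed to a user-registered handler,
// instead of producing a throw-away single-record window aggregate.
public class LateRecordPolicy {

    // Hypothetical handler interface a window operator could accept.
    public interface LateRecordHandler {
        void onLateRecord(long timestamp, Object value);
    }

    private final long retentionMs;
    private final LateRecordHandler handler; // may be null -> drop silently

    public LateRecordPolicy(long retentionMs, LateRecordHandler handler) {
        this.retentionMs = retentionMs;
        this.handler = handler;
    }

    // A record is "late" if it is older than the oldest retained window,
    // i.e. its timestamp is more than retentionMs behind observed stream time.
    public boolean isLate(long recordTimestamp, long streamTime) {
        return recordTimestamp < streamTime - retentionMs;
    }

    // Returns true if the record should go through normal window processing.
    public boolean process(long recordTimestamp, long streamTime, Object value) {
        if (!isLate(recordTimestamp, streamTime)) {
            return true;
        }
        if (handler != null) {
            handler.onLateRecord(recordTimestamp, value); // user decides
        }
        return false; // never recompute the already-emitted window aggregate
    }

    public static void main(String[] args) {
        LateRecordPolicy dropPolicy = new LateRecordPolicy(1000L, null);
        System.out.println(dropPolicy.process(0L, 5000L, "late-value")); // false: dropped
        System.out.println(dropPolicy.process(4500L, 5000L, "on-time")); // true: processed
    }
}
```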





[jira] [Resolved] (KAFKA-5024) Old clients don't support message format V2

2017-06-02 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5024.

Resolution: Won't Fix

I added a note on the KIP. Closing this.

> Old clients don't support message format V2
> ---
>
> Key: KAFKA-5024
> URL: https://issues.apache.org/jira/browse/KAFKA-5024
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>  Labels: documentation, exactly-once
> Fix For: 0.11.0.0
>
>
> Is this OK? If so, we can close this JIRA, but we should make that decision 
> consciously.





[jira] [Commented] (KAFKA-5374) AdminClient gets "server returned information about unknown correlation ID" when communicating with older brokers

2017-06-02 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035664#comment-16035664
 ] 

Colin P. McCabe commented on KAFKA-5374:


This was introduced by KAFKA-5291.  The issue here is that the internal 
metadata requests being made by AdminClient were using 
{{allowAutoCreateTopics=false}}, which did not work with brokers older than 
0.11.

Unfortunately, NetworkClient doesn't handle this failure case well, and the 
failed internal metadata request responses were being returned from the 
{{NetworkClient#poll}} call.  This caused the puzzling error message, since 
those requests had not been explicitly made by the {{AdminClient}}.

Since we are just making MetadataRequests for zero topics, there is no point in 
setting allowAutoCreateTopics=false anyway.  So I just set it to true in this 
patch.  I added a system test to the patch that will test a cross-version 
AdminClient call.  In the future, we should probably look into handling 
UnsupportedVersionException and other errors better in the internal metadata 
request code, but that's a bigger patch.
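The safety argument in the comment above (a metadata request for an empty topic list cannot auto-create anything, regardless of the flag) can be written down as a tiny check. The helper below is hypothetical and only mirrors the reasoning; it is not the actual broker code:

```java
import java.util.Collections;
import java.util.List;

// Illustrative sketch of why allowAutoTopicCreation=true is harmless for the
// AdminClient's node-information-only metadata requests: auto-creation can
// only apply to topics that are actually named in the request.
public class MetadataAutoCreate {

    // Hypothetical helper mirroring the broker-side condition: a topic can be
    // auto-created only if the flag is set AND the request names topics.
    public static boolean mayAutoCreate(List<String> requestedTopics,
                                        boolean allowAutoTopicCreation) {
        return allowAutoTopicCreation && !requestedTopics.isEmpty();
    }

    public static void main(String[] args) {
        // A zero-topic request, as used when fetching node information only:
        // setting the flag to true can never cause topic creation.
        System.out.println(mayAutoCreate(Collections.emptyList(), true));          // false
        System.out.println(mayAutoCreate(Collections.singletonList("foo"), true)); // true
    }
}
```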

> AdminClient gets "server returned information about unknown correlation ID" 
> when communicating with older brokers
> -
>
> Key: KAFKA-5374
> URL: https://issues.apache.org/jira/browse/KAFKA-5374
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> AdminClient gets "server returned information about unknown correlation ID" 
> when communicating with older brokers.
> The messages look like this:
> {code}
>  [2017-06-02 22:58:29,774] ERROR Internal server error on -2: server returned 
> information about unknown correlation ID 5 
> (org.apache.kafka.clients.admin.KafkaAdminClient)
> {code}





[GitHub] kafka pull request #3219: KAFKA-5374. AdminClient gets "server returned info...

2017-06-02 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3219

KAFKA-5374. AdminClient gets "server returned information about unkno…

…wn correlation ID" when communicating with older brokers

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5374

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3219.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3219


commit 7b255c597f90a0823d7b2e594d6bb32dda04087d
Author: Colin P. Mccabe 
Date:   2017-06-03T00:01:02Z

KAFKA-5374. AdminClient gets "server returned information about unknown 
correlation ID" when communicating with older brokers






Jenkins build is back to normal : kafka-0.11.0-jdk7 #86

2017-06-02 Thread Apache Jenkins Server
See 




[jira] [Commented] (KAFKA-5373) ConsoleConsumer prints out object addresses rather than what is expected

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035627#comment-16035627
 ] 

ASF GitHub Bot commented on KAFKA-5373:
---

GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3218

KAFKA-5373 : Revert breaking change to console consumer 

This patch reverts b63e41ea78a58bdea78be33f90bfcb61ce5988d3, since that commit 
broke the console consumer -- the consumer prints the addresses of the messages
instead of their contents.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka KAFKA-5373-fix-console-consumer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3218.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3218


commit 245d0d83ddf1d190fefe24794182be29b0b0aaa8
Author: Apurva Mehta 
Date:   2017-06-02T23:33:54Z

Revert b63e41ea78a58bdea78be33f90bfcb61ce5988d3 since it broke the
console consumer -- the consumer prints the addresses of the messages
instead of the contents with that patch.




> ConsoleConsumer prints out object addresses rather than what is expected
> 
>
> Key: KAFKA-5373
> URL: https://issues.apache.org/jira/browse/KAFKA-5373
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Apurva Mehta
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> The ConsoleConsumer prints out object addresses rather than what is expected.
> {code}
> root@2e98af0fa6d6:/# /opt/kafka-dev/bin/kafka-console-consumer.sh 
> --bootstrap-server 'knode09:9092,knode04:9092,knode12:9092' --topic foo   
> [6/85]
> [B@687e99d8
> [B@c86b9e3
> {code}
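The `[B@...` lines in the output above are the default `Object.toString()` of a Java `byte[]` (the type tag `[B` plus an identity hash), which is what appears when a formatter prints the raw value instead of decoding it. A minimal standalone reproduction of the symptom, not the ConsoleConsumer code itself:

```java
import java.nio.charset.StandardCharsets;

// Demonstrates the symptom above: printing a byte[] directly yields its
// object identity ("[B@<hash>"), not its contents. A console formatter must
// decode the bytes explicitly.
public class ByteArrayPrinting {

    public static String raw(byte[] value) {
        return value.toString(); // e.g. "[B@687e99d8" -- the buggy output
    }

    public static String decoded(byte[] value) {
        return new String(value, StandardCharsets.UTF_8); // the intended output
    }

    public static void main(String[] args) {
        byte[] message = "foo-value".getBytes(StandardCharsets.UTF_8);
        System.out.println(raw(message));     // prints something like [B@687e99d8
        System.out.println(decoded(message)); // prints foo-value
    }
}
```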





[GitHub] kafka pull request #3218: KAFKA-5373 : Revert breaking change to console con...

2017-06-02 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3218

KAFKA-5373 : Revert breaking change to console consumer 

This patch reverts b63e41ea78a58bdea78be33f90bfcb61ce5988d3, since that commit 
broke the console consumer -- the consumer prints the addresses of the messages
instead of their contents.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka KAFKA-5373-fix-console-consumer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3218.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3218


commit 245d0d83ddf1d190fefe24794182be29b0b0aaa8
Author: Apurva Mehta 
Date:   2017-06-02T23:33:54Z

Revert b63e41ea78a58bdea78be33f90bfcb61ce5988d3 since it broke the
console consumer -- the consumer prints the addresses of the messages
instead of the contents with that patch.






[jira] [Created] (KAFKA-5374) AdminClient gets "server returned information about unknown correlation ID" when communicating with older brokers

2017-06-02 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5374:
--

 Summary: AdminClient gets "server returned information about 
unknown correlation ID" when communicating with older brokers
 Key: KAFKA-5374
 URL: https://issues.apache.org/jira/browse/KAFKA-5374
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 0.11.0.0
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe
Priority: Blocker
 Fix For: 0.11.0.0


AdminClient gets "server returned information about unknown correlation ID" 
when communicating with older brokers.

The messages look like this:
{code}
 [2017-06-02 22:58:29,774] ERROR Internal server error on -2: server returned 
information about unknown correlation ID 5 
(org.apache.kafka.clients.admin.KafkaAdminClient)
{code}





[jira] [Created] (KAFKA-5373) ConsoleConsumer prints out object addresses rather than what is expected

2017-06-02 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5373:
--

 Summary: ConsoleConsumer prints out object addresses rather than 
what is expected
 Key: KAFKA-5373
 URL: https://issues.apache.org/jira/browse/KAFKA-5373
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Colin P. McCabe
Assignee: Apurva Mehta
Priority: Blocker
 Fix For: 0.11.0.0


The ConsoleConsumer prints out object addresses rather than what is expected.

{code}
root@2e98af0fa6d6:/# /opt/kafka-dev/bin/kafka-console-consumer.sh 
--bootstrap-server 'knode09:9092,knode04:9092,knode12:9092' --topic foo 
  [6/85]
[B@687e99d8
[B@c86b9e3
{code}





[jira] [Comment Edited] (KAFKA-5070) org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the state directory: /opt/rocksdb/pulse10/0_18

2017-06-02 Thread Kevin Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035590#comment-16035590
 ] 

Kevin Chen edited comment on KAFKA-5070 at 6/2/17 11:03 PM:


Hi, Matthias:

My state store was not corrupted. I had tried deleting my local state dir, 
but that did not work every time.

I think I found the root cause: it is related to timing. Even without deleting 
my local state dir, I just needed to increase my poll time and the application 
started OK; when I decreased it again, it failed.

During a fresh start, the application was assigned 12 tasks. Some of the tasks 
initialized and started processing data while the others were still being 
initialized. Since I am using an in-memory state store, it pulls data from the 
broker, which may starve the tasks that have already been initialized. If 
those tasks cannot finish their processing within the poll time, a rebalance 
is triggered, and a new task is started while the old one is still running. 
Also, I do not think the lock was released properly when this happens, because 
when I debugged into your library, I saw it throw OverlappingFileLockException. 
According to the Oracle docs, this means:

   Unchecked exception thrown when an attempt is made to acquire a lock on a 
region of a file that overlaps a region already locked by the same Java virtual 
machine, or when another thread is already waiting to lock an overlapping 
region of the same file.

Let me know if you need more information from me to fix the issue.

thanks,
Kevin




was (Author: kchen):
Hi, Matthias:

My state store was not corrupted. I had tried deleting my local state dir, 
but that did not work every time.

I think I found the root cause: it is related to timing. Even without deleting 
my local state dir, I just needed to increase my poll time, and then the 
application started OK.

During a fresh start, the application was assigned 12 tasks. Some of the tasks 
initialized and started processing data while the others were still being 
initialized. Since I am using an in-memory state store, it pulls data from the 
broker, which may starve the tasks that have already been initialized. If 
those tasks cannot finish their processing within the poll time, a rebalance 
is triggered, and a new task is started while the old one is still running. 
Also, I do not think the lock was released properly when this happens, because 
when I debugged into your library, I saw it throw OverlappingFileLockException. 
According to the Oracle docs, this means:

   Unchecked exception thrown when an attempt is made to acquire a lock on a 
region of a file that overlaps a region already locked by the same Java virtual 
machine, or when another thread is already waiting to lock an overlapping 
region of the same file.

Let me know if you need more information from me to fix the issue.

thanks,
Kevin



> org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the 
> state directory: /opt/rocksdb/pulse10/0_18
> 
>
> Key: KAFKA-5070
> URL: https://issues.apache.org/jira/browse/KAFKA-5070
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Linux Version
>Reporter: Dhana
>Assignee: Matthias J. Sax
> Attachments: RocksDB_LockStateDirec.7z
>
>
> Notes: we run two instances of the consumer on two different machines/nodes.
> We have 400 partitions and 200 stream threads per consumer, with 2 consumers.
> We perform an HA test (on rebalance - shutdown of one of the consumers/brokers), 
> and we see this happening.
> Error:
> 2017-04-05 11:36:09.352 WARN  StreamThread:1184 StreamThread-66 - Could not 
> create task 0_115. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_115] Failed to lock 
> the state directory: /opt/rocksdb/pulse10/0_115
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:102)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
>   at 
> 

[jira] [Commented] (KAFKA-5070) org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the state directory: /opt/rocksdb/pulse10/0_18

2017-06-02 Thread Kevin Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035590#comment-16035590
 ] 

Kevin Chen commented on KAFKA-5070:
---

Hi, Matthias:

My state store was not corrupted. I had tried deleting my local state dir, 
but that did not work every time.

I think I found the root cause: it is related to timing. Even without deleting 
my local state dir, I just needed to increase my poll time, and then the 
application started OK.

During a fresh start, the application was assigned 12 tasks. Some of the tasks 
initialized and started processing data while the others were still being 
initialized. Since I am using an in-memory state store, it pulls data from the 
broker, which may starve the tasks that have already been initialized. If 
those tasks cannot finish their processing within the poll time, a rebalance 
is triggered, and a new task is started while the old one is still running. 
Also, I do not think the lock was released properly when this happens, because 
when I debugged into your library, I saw it throw OverlappingFileLockException. 
According to the Oracle docs, this means:

   Unchecked exception thrown when an attempt is made to acquire a lock on a 
region of a file that overlaps a region already locked by the same Java virtual 
machine, or when another thread is already waiting to lock an overlapping 
region of the same file.

Let me know if you need more information from me to fix the issue.

thanks,
Kevin
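The quoted `OverlappingFileLockException` behaviour is easy to reproduce in isolation: Java file locks are held on behalf of the whole JVM, so a second channel in the same process that tries to lock the same region fails immediately rather than blocking. A minimal standalone demonstration (plain NIO, not Streams code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class StateDirLockDemo {

    // Returns true if a second lock attempt on the same file, within the same
    // JVM, throws OverlappingFileLockException (locks are per-JVM, not per-thread).
    static boolean secondLockOverlaps(Path lockFile) throws IOException {
        try (FileChannel first = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock held = first.lock()) {
            try (FileChannel second = FileChannel.open(lockFile,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                second.lock(); // this JVM already holds an overlapping lock
                return false;
            } catch (OverlappingFileLockException e) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path lockFile = Files.createTempFile("state-dir", ".lock");
        System.out.println(secondLockOverlaps(lockFile)); // prints true
        Files.deleteIfExists(lockFile);
    }
}
```

This matches the rebalance scenario in the comment: if a new task tries to lock a state directory before the old task's lock is released, the second attempt fails inside the same JVM.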



> org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the 
> state directory: /opt/rocksdb/pulse10/0_18
> 
>
> Key: KAFKA-5070
> URL: https://issues.apache.org/jira/browse/KAFKA-5070
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Linux Version
>Reporter: Dhana
>Assignee: Matthias J. Sax
> Attachments: RocksDB_LockStateDirec.7z
>
>
> Notes: we run two instances of the consumer on two different machines/nodes.
> We have 400 partitions and 200 stream threads per consumer, with 2 consumers.
> We perform an HA test (on rebalance - shutdown of one of the consumers/brokers), 
> and we see this happening.
> Error:
> 2017-04-05 11:36:09.352 WARN  StreamThread:1184 StreamThread-66 - Could not 
> create task 0_115. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_115] Failed to lock 
> the state directory: /opt/rocksdb/pulse10/0_115
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:102)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)





Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-06-02 Thread Jason Gustafson
Thanks. +1

On Thu, Jun 1, 2017 at 9:40 PM, Matthias J. Sax 
wrote:

> +1
>
> Thanks for updating the KIP!
>
> -Matthias
>
> On 6/1/17 6:18 PM, Bill Bejeck wrote:
> > +1
> >
> > Thanks,
> > Bill
> >
> > On Thu, Jun 1, 2017 at 7:45 PM, Guozhang Wang 
> wrote:
> >
> >> +1 again. Thanks.
> >>
> >> On Tue, May 30, 2017 at 1:46 PM, BigData dev 
> >> wrote:
> >>
> >>> Hi All,
> >>> Updated the KIP, as the consumer configurations are required for both
> >> Admin
> >>> Client and Consumer in Stream reset tool. Updated the KIP to use
> >>> command-config option, similar to other tools like
> >> kafka-consumer-groups.sh
> >>>
> >>>
> >>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>> 157+-+Add+consumer+config+options+to+streams+reset+tool
> >>>  >>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
> >>>
> >>>
> >>> So, starting the voting process again for further inputs.
> >>>
> >>> This vote will run for a minimum of 72 hours.
> >>>
> >>> Thanks,
> >>>
> >>> Bharat
> >>>
> >>>
> >>>
> >>> On Tue, May 30, 2017 at 1:18 PM, Guozhang Wang 
> >> wrote:
> >>>
>  +1. Thanks!
> 
>  On Tue, May 16, 2017 at 1:12 AM, Eno Thereska  >
>  wrote:
> 
> > +1 thanks.
> >
> > Eno
> >> On 16 May 2017, at 04:20, BigData dev 
> >>> wrote:
> >>
> >> Hi All,
> >> Given the simple and non-controversial nature of the KIP, I would
> >>> like
>  to
> >> start the voting process for KIP-157: Add consumer config options
> >> to
> >> streams reset tool
> >>
> >> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-
> > +Add+consumer+config+options+to+streams+reset+tool
> >>  > +Add+consumer+config+options+to+streams+reset+tool>*
> >>
> >>
> >> The vote will run for a minimum of 72 hours.
> >>
> >> Thanks,
> >>
> >> Bharat
> >
> >
> 
> 
>  --
>  -- Guozhang
> 
> >>>
> >>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
>
>


[jira] [Updated] (KAFKA-5366) Add cases for concurrent transactional reads and writes in system tests

2017-06-02 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5366:

Fix Version/s: (was: 0.11.1.0)
   0.11.0.1

> Add cases for concurrent transactional reads and writes in system tests
> ---
>
> Key: KAFKA-5366
> URL: https://issues.apache.org/jira/browse/KAFKA-5366
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>  Labels: exactly-once
> Fix For: 0.11.0.1
>
>
> Currently the transactions system test does a transactional copy while 
> bouncing brokers and clients, and then does a verifying read on the output 
> topic to ensure that it exactly matches the input. 
> We should also have a transactional consumer reading the tail of the output 
> topic as the writes are happening, and then assert that the values _it_ reads 
> also exactly match the values in the source topics. 
> This test really exercises the abort index, and we don't have any of them in 
> the system or integration tests right now. 
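The "transactional consumer reading the tail" described above would run with the consumer's `read_committed` isolation level, a real 0.11.0.0 consumer setting that hides aborted transactional data. A minimal configuration sketch; the bootstrap address and group id below are placeholders, not values from the actual system test:

```java
import java.util.Properties;

// Minimal consumer configuration for a verifying reader that should only see
// committed transactional data. Broker address and group id are placeholders.
public class ReadCommittedConfig {

    public static Properties verifyingConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "transactions-verifier");    // placeholder
        props.put("isolation.level", "read_committed");    // hide aborted txns
        props.put("enable.auto.commit", "false");          // verify explicitly
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(verifyingConsumerProps().getProperty("isolation.level"));
        // prints read_committed
    }
}
```

With `read_committed`, the broker only returns records below the last stable offset, which is exactly the behaviour KAFKA-5355 reports as broken and which this test is meant to exercise.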





[jira] [Updated] (KAFKA-5366) Add cases for concurrent transactional reads and writes in system tests

2017-06-02 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5366:

Status: Patch Available  (was: Open)

> Add cases for concurrent transactional reads and writes in system tests
> ---
>
> Key: KAFKA-5366
> URL: https://issues.apache.org/jira/browse/KAFKA-5366
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>  Labels: exactly-once
> Fix For: 0.11.1.0
>
>
> Currently the transactions system test does a transactional copy while 
> bouncing brokers and clients, and then does a verifying read on the output 
> topic to ensure that it exactly matches the input. 
> We should also have a transactional consumer reading the tail of the output 
> topic as the writes are happening, and then assert that the values _it_ reads 
> also exactly match the values in the source topics. 
> This test really exercises the abort index, and we don't have any of them in 
> the system or integration tests right now. 





[jira] [Created] (KAFKA-5372) Unexpected state transition Dead to PendingShutdown

2017-06-02 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5372:
--

 Summary: Unexpected state transition Dead to PendingShutdown
 Key: KAFKA-5372
 URL: https://issues.apache.org/jira/browse/KAFKA-5372
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


I often see this running integration tests:
{code}
[2017-06-02 15:36:03,411] WARN stream-thread 
[appId-1-c382ef0a-adbd-422b-9717-9b2bc52b55eb-StreamThread-13] Unexpected state 
transition from DEAD to PENDING_SHUTDOWN. 
(org.apache.kafka.streams.processor.internals.StreamThread:976)
{code}
Maybe a race condition on shutdown or something?





[jira] [Commented] (KAFKA-5366) Add cases for concurrent transactional reads and writes in system tests

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035542#comment-16035542
 ] 

ASF GitHub Bot commented on KAFKA-5366:
---

GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3217

KAFKA-5366: Add concurrent reads to transactions system test

This currently fails in multiple ways, one of which is most likely 
KAFKA-5355, where the concurrent consumer reads duplicates.

During broker bounces, the concurrent consumer misses messages completely. 
This is another bug.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-5366-add-concurrent-reads-to-transactions-system-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3217.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3217


commit cd0990784aaa26fa6485e9f369a600b85c1647f9
Author: Apurva Mehta 
Date:   2017-06-02T22:25:00Z

Add a concurrent consumer in the transactions system tests. This will 
exercise the abort index

commit 71fcad197b403ff7873d646feec287d92793cbe6
Author: Apurva Mehta 
Date:   2017-06-02T22:29:47Z

Bounce brokers as well




> Add cases for concurrent transactional reads and writes in system tests
> ---
>
> Key: KAFKA-5366
> URL: https://issues.apache.org/jira/browse/KAFKA-5366
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>  Labels: exactly-once
> Fix For: 0.11.1.0
>
>
> Currently the transactions system test does a transactional copy while 
> bouncing brokers and clients, and then does a verifying read on the output 
> topic to ensure that it exactly matches the input. 
> We should also have a transactional consumer reading the tail of the output 
> topic as the writes are happening, and then assert that the values _it_ reads 
> also exactly match the values in the source topics. 
> This test really exercises the abort index, and we don't have any of them in 
> the system or integration tests right now. 
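
The "transactional consumer reading the tail" described above comes down to the
consumer's isolation level. A minimal configuration sketch (the group id and
bootstrap address are illustrative, not from the test):

```properties
# Illustrative settings for the concurrent verifying consumer.
# isolation.level=read_committed makes the broker filter out messages
# from aborted transactions (via the abort index) before returning them.
bootstrap.servers=localhost:9092
group.id=transactions-tail-verifier
isolation.level=read_committed
auto.offset.reset=earliest
```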





[GitHub] kafka pull request #3217: KAFKA-5366: Add concurrent reads to transactions s...

2017-06-02 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3217

KAFKA-5366: Add concurrent reads to transactions system test

This currently fails in multiple ways, one of which is most likely 
KAFKA-5355: the concurrent consumer reads duplicates.

During broker bounces, the concurrent consumer misses messages completely. 
This is another bug.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-5366-add-concurrent-reads-to-transactions-system-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3217.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3217


commit cd0990784aaa26fa6485e9f369a600b85c1647f9
Author: Apurva Mehta 
Date:   2017-06-02T22:25:00Z

Add a concurrent consumer in the transactions system tests. This will 
exercise the abort index

commit 71fcad197b403ff7873d646feec287d92793cbe6
Author: Apurva Mehta 
Date:   2017-06-02T22:29:47Z

Bounce brokers as well




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3216: HOTFIX: Reinstate the placeholder for logPrefix in...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3216




Build failed in Jenkins: kafka-trunk-jdk7 #2330

2017-06-02 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-3982: Fix processing order of some of the consumer properties

[ismael] KAFKA-5236; Increase the block/buffer size when compressing with Snappy

--
[...truncated 952.04 KB...]

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > 
shouldOverwriteConfigFromConfigFileOrPropertiesWithConfigFromArguments STARTED

kafka.tools.ConsoleConsumerTest > 
shouldOverwriteConfigFromConfigFileOrPropertiesWithConfigFromArguments PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady STARTED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady PASSED

kafka.common.InterBrokerSendThreadTest > 
shouldCallCompletionHandlerWithDisconnectedResponseWhenNodeNotReady STARTED

kafka.common.InterBrokerSendThreadTest > 
shouldCallCompletionHandlerWithDisconnectedResponseWhenNodeNotReady PASSED

kafka.common.InterBrokerSendThreadTest > shouldNotSendAnythingWhenNoRequests 
STARTED

kafka.common.InterBrokerSendThreadTest > shouldNotSendAnythingWhenNoRequests 
PASSED

kafka.common.ConfigTest > testInvalidGroupIds STARTED

kafka.common.ConfigTest 

[GitHub] kafka pull request #3216: HOTFIX: Reinstate the placeholder for logPrefix in...

2017-06-02 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3216

HOTFIX: Reinstate the placeholder for logPrefix in TransactionManager



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
HOTFIX-logging-bug-in-transaction-manager

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3216.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3216


commit 975c6c673abfef9e70ca17c1b0a6f1efe4ada927
Author: Apurva Mehta 
Date:   2017-06-02T22:26:18Z

Reinstate the placeholder for log prefix






[GitHub] kafka pull request #3215: Adding deprecated KTable methods to docs from KIP-...

2017-06-02 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/3215

Adding deprecated KTable methods to docs from KIP-114



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka kip114-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3215.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3215


commit bb1012c6db9ac5f2eaea608cc66758e81b99b21c
Author: Bill Bejeck 
Date:   2017-06-02T22:17:12Z

Adding deprecated KTable methods to docs from KIP-114






[jira] [Commented] (KAFKA-5070) org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the state directory: /opt/rocksdb/pulse10/0_18

2017-06-02 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035508#comment-16035508
 ] 

Matthias J. Sax commented on KAFKA-5070:


If you hit this on a fresh restart, I guess your state directory is 
"corrupted". In this case, you need to delete the state directory so it can be 
recreated (either manually, or by using {{KafkaStreams#cleanup()}}).
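
Deleting the state directory manually amounts to removing the task
subdirectory under the application's {{state.dir}} while the instance is
stopped. A sketch using a temporary directory as a stand-in (the real path in
this report would be something like /opt/rocksdb/pulse10/0_18):

```shell
# Simulate removing a corrupted Kafka Streams state directory.
# A temp dir stands in for <state.dir>/<application.id>/<task-id>.
STATE_DIR="$(mktemp -d)/0_18"
mkdir -p "$STATE_DIR"   # stand-in for the on-disk task state
rm -rf "$STATE_DIR"     # Streams rebuilds state from the changelog on restart
[ ! -d "$STATE_DIR" ] && echo "state directory removed"
```

The instance must not be running while the directory is deleted, otherwise the
directory lock reappears.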

Btw: it's usually better to ask questions like this on the mailing list (cf. 
http://kafka.apache.org/contact). Anyway: yes, you can share a store between 
processors by attaching it to both.
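
Attaching one store to two processors can be sketched in pseudocode modeled on
the 0.10.2-era low-level {{TopologyBuilder}} API (processor, store, and topic
names are illustrative, not from the reporter's topology):

```java
// Pseudocode sketch against the 0.10.2 TopologyBuilder API; names are illustrative.
TopologyBuilder builder = new TopologyBuilder();
builder.addSource("source", "input-topic")
       .addProcessor("first-processor", MyProcessor::new, "source")
       .addProcessor("second-processor", MyProcessor::new, "first-processor")
       // Passing both processor names connects the same store to each of them:
       .addStateStore(
           Stores.create("shared-store")
                 .withStringKeys()
                 .withStringValues()
                 .persistent()
                 .build(),
           "first-processor", "second-processor")
       .addSink("sink-1", "output-topic-1", "first-processor")
       .addSink("sink-2", "output-topic-2", "second-processor");
```

Inside either processor, {{ProcessorContext#getStateStore("shared-store")}}
then resolves to the same attached store.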

> org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the 
> state directory: /opt/rocksdb/pulse10/0_18
> 
>
> Key: KAFKA-5070
> URL: https://issues.apache.org/jira/browse/KAFKA-5070
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Linux Version
>Reporter: Dhana
>Assignee: Matthias J. Sax
> Attachments: RocksDB_LockStateDirec.7z
>
>
> Notes: we run two instances of the consumer on two different machines/nodes.
> We have 400 partitions and 200 stream threads per consumer, with 2 consumers.
> We perform an HA test (on rebalance, shutdown of one of the consumers/brokers),
> and we see this happening:
> Error:
> 2017-04-05 11:36:09.352 WARN  StreamThread:1184 StreamThread-66 - Could not 
> create task 0_115. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_115] Failed to lock 
> the state directory: /opt/rocksdb/pulse10/0_115
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.(ProcessorStateManager.java:102)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)





[jira] [Commented] (KAFKA-5070) org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the state directory: /opt/rocksdb/pulse10/0_18

2017-06-02 Thread Kevin Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035501#comment-16035501
 ] 

Kevin Chen commented on KAFKA-5070:
---

A little background about our topology: the source node feeds a processor 
that sends its result to a sink node and to a second processor; the second 
processor connects only to a different sink node.

Both processors use the same Java class, but they have different state stores 
to work with.

Another question: is it possible to share the same state store between the two 
processors?

Thanks,
Kevin

> org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the 
> state directory: /opt/rocksdb/pulse10/0_18
> 
>
> Key: KAFKA-5070
> URL: https://issues.apache.org/jira/browse/KAFKA-5070
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Linux Version
>Reporter: Dhana
>Assignee: Matthias J. Sax
> Attachments: RocksDB_LockStateDirec.7z
>
>
> Notes: we run two instances of the consumer on two different machines/nodes.
> We have 400 partitions and 200 stream threads per consumer, with 2 consumers.
> We perform an HA test (on rebalance, shutdown of one of the consumers/brokers),
> and we see this happening:
> Error:
> 2017-04-05 11:36:09.352 WARN  StreamThread:1184 StreamThread-66 - Could not 
> create task 0_115. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_115] Failed to lock 
> the state directory: /opt/rocksdb/pulse10/0_115
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.(ProcessorStateManager.java:102)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)





[GitHub] kafka pull request #3214: MINOR: syntax brush for java / bash / json / text

2017-06-02 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3214

MINOR: syntax brush for java / bash / json / text



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KMinor-doc-java-brush

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3214.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3214


commit 1d874182983d96f1cb330ece858de37cf70638bd
Author: Guozhang Wang 
Date:   2017-06-02T21:50:12Z

brush text for java / bash / json






[jira] [Updated] (KAFKA-5371) SyncProducerTest.testReachableServer has become flaky

2017-06-02 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5371:

Description: 
This test has started failing recently on Jenkins with the following:

{noformat}
org.scalatest.junit.JUnitTestFailedError: Unexpected failure sending message to 
broker. null
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
at 
org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
at org.scalatest.Assertions$class.fail(Assertions.scala:1089)
at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71)
at 
kafka.producer.SyncProducerTest.testReachableServer(SyncProducerTest.scala:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:147)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:129)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:46)
at 

[jira] [Assigned] (KAFKA-5371) SyncProducerTest.testReachableServer has become flaky

2017-06-02 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta reassigned KAFKA-5371:
---

Assignee: Apurva Mehta

> SyncProducerTest.testReachableServer has become flaky
> -
>
> Key: KAFKA-5371
> URL: https://issues.apache.org/jira/browse/KAFKA-5371
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>
> This test has started failing recently on Jenkins with the following:
> {noformat}
> org.scalatest.junit.JUnitTestFailedError: Unexpected failure sending message 
> to broker. null
> Stacktrace
> org.scalatest.junit.JUnitTestFailedError: Unexpected failure sending message 
> to broker. null
>   at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
>   at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
>   at org.scalatest.Assertions$class.fail(Assertions.scala:1089)
>   at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71)
>   at 
> kafka.producer.SyncProducerTest.testReachableServer(SyncProducerTest.scala:71)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Created] (KAFKA-5371) SyncProducerTest.testReachableServer has become flaky

2017-06-02 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5371:
---

 Summary: SyncProducerTest.testReachableServer has become flaky
 Key: KAFKA-5371
 URL: https://issues.apache.org/jira/browse/KAFKA-5371
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Apurva Mehta


This test has started failing recently on Jenkins with the following:

{noformat}
org.scalatest.junit.JUnitTestFailedError: Unexpected failure sending message to 
broker. null
Stacktrace

org.scalatest.junit.JUnitTestFailedError: Unexpected failure sending message to 
broker. null
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
at 
org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
at org.scalatest.Assertions$class.fail(Assertions.scala:1089)
at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71)
at 
kafka.producer.SyncProducerTest.testReachableServer(SyncProducerTest.scala:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:147)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:129)
at 

[jira] [Updated] (KAFKA-4595) Controller send thread can't stop when broker change listener event trigger for dead brokers

2017-06-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4595:
---
Fix Version/s: (was: 0.11.0.1)
   0.11.0.0

> Controller send thread can't stop when broker change listener event trigger 
> for  dead brokers
> -
>
> Key: KAFKA-4595
> URL: https://issues.apache.org/jira/browse/KAFKA-4595
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.10.1.1
>Reporter: Pengwei
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.0
>
>
> In our test env, we found the controller was not working after a delete topic 
> operation and a network issue; the stack trace is below:
> "ZkClient-EventThread-15-192.168.1.3:2184,192.168.1.4:2184,192.168.1.5:2184" 
> #15 daemon prio=5 os_prio=0 tid=0x7fb76416e000 nid=0x3019 waiting on 
> condition [0x7fb76b7c8000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc05497b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> kafka.utils.ShutdownableThread.awaitShutdown(ShutdownableThread.scala:50)
>   at kafka.utils.ShutdownableThread.shutdown(ShutdownableThread.scala:32)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$removeExistingBroker(ControllerChannelManager.scala:128)
>   at 
> kafka.controller.ControllerChannelManager.removeBroker(ControllerChannelManager.scala:81)
>   - locked <0xc0258760> (a java.lang.Object)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply$mcVI$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
>Locked ownable synchronizers:
>   - <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> "Controller-1001-to-broker-1003-send-thread" #88 prio=5 os_prio=0 
> tid=0x7fb778342000 nid=0x5a4c waiting on condition [0x7fb761de]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> 

[jira] [Resolved] (KAFKA-4595) Controller send thread can't stop when broker change listener event trigger for dead brokers

2017-06-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4595.

Resolution: Fixed
  Assignee: Onur Karaman

> Controller send thread can't stop when broker change listener event trigger 
> for  dead brokers
> -
>
> Key: KAFKA-4595
> URL: https://issues.apache.org/jira/browse/KAFKA-4595
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.10.1.1
>Reporter: Pengwei
>Assignee: Onur Karaman
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.0
>
>
> In our test env, we found the controller was not working after a delete topic 
> operation and a network issue; the stack trace is below:
> "ZkClient-EventThread-15-192.168.1.3:2184,192.168.1.4:2184,192.168.1.5:2184" 
> #15 daemon prio=5 os_prio=0 tid=0x7fb76416e000 nid=0x3019 waiting on 
> condition [0x7fb76b7c8000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc05497b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> kafka.utils.ShutdownableThread.awaitShutdown(ShutdownableThread.scala:50)
>   at kafka.utils.ShutdownableThread.shutdown(ShutdownableThread.scala:32)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$removeExistingBroker(ControllerChannelManager.scala:128)
>   at 
> kafka.controller.ControllerChannelManager.removeBroker(ControllerChannelManager.scala:81)
>   - locked <0xc0258760> (a java.lang.Object)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply$mcVI$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
>Locked ownable synchronizers:
>   - <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> "Controller-1001-to-broker-1003-send-thread" #88 prio=5 os_prio=0 
> tid=0x7fb778342000 nid=0x5a4c waiting on condition [0x7fb761de]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> 

[jira] [Commented] (KAFKA-5070) org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the state directory: /opt/rocksdb/pulse10/0_18

2017-06-02 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035471#comment-16035471
 ] 

Matthias J. Sax commented on KAFKA-5070:


[~kchen] At a regular rebalance, this exception is expected. That's why we log 
it at {{WARN}} level -- the stack trace is a little misleading -- it's not a 
real issue, and we removed the stack trace for the upcoming {{0.11}} release. So 
as long as the exception resolves itself -- note the {{"Will retry."}} clause -- 
it's all fine. It's only a problem if the exception does not resolve itself and 
the thread gets stuck retrying.

In contrast, the {{rocksdb.flush()}} exception seems to be a real problem. Are 
you using {{0.10.2.0}} or {{0.10.2.1}}? We fixed a couple of issues with regard 
to state dir locking in {{0.10.2.1}}, so I would highly recommend upgrading to 
that bug fix release. More fixes are coming in {{0.11}} -- the target release 
date is mid June. If you are already on {{0.10.2.1}}, could you share some logs 
or a stack trace for the {{rocksdb.flush()}} exception?

Hope this helps.
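The "expected at rebalance, retried with backoff" behavior described above can 
be sketched in plain Java. This is an illustrative stand-in, not Kafka's actual 
StateDirectory code; the lock-file path, attempt count, and backoff parameters 
are invented for the example:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class StateDirLock {
    // Try to acquire an exclusive file lock on a task's state directory,
    // retrying with a fixed backoff. During a rebalance, the previous owner
    // of the task may not have released the lock yet, so a few failed
    // attempts ("Will retry.") are expected and harmless.
    static FileLock lockWithBackoff(Path lockFile, int attempts, long backoffMs)
            throws IOException, InterruptedException {
        FileChannel channel = FileChannel.open(
                lockFile, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        for (int i = 0; i < attempts; i++) {
            FileLock lock = channel.tryLock(); // non-blocking; null if another process holds it
            if (lock != null) {
                return lock;                   // acquired: safe to open the state store here
            }
            Thread.sleep(backoffMs);           // back off before retrying
        }
        channel.close();
        throw new IOException("Failed to lock the state directory: " + lockFile);
    }
}
```

A stuck retry loop (the problematic case) corresponds to {{tryLock}} returning 
null on every attempt until the attempts are exhausted.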

> org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the 
> state directory: /opt/rocksdb/pulse10/0_18
> 
>
> Key: KAFKA-5070
> URL: https://issues.apache.org/jira/browse/KAFKA-5070
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Linux Version
>Reporter: Dhana
>Assignee: Matthias J. Sax
> Attachments: RocksDB_LockStateDirec.7z
>
>
> Notes: we run two consumer instances on two different machines/nodes.
> We have 400 partitions and 200 stream threads per consumer, with 2 consumers.
> We perform an HA test (on rebalance - shutdown of one of the consumers/brokers); 
> we see this happening.
> Error:
> 2017-04-05 11:36:09.352 WARN  StreamThread:1184 StreamThread-66 - Could not 
> create task 0_115. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_115] Failed to lock 
> the state directory: /opt/rocksdb/pulse10/0_115
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.(ProcessorStateManager.java:102)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4595) Controller send thread can't stop when broker change listener event trigger for dead brokers

2017-06-02 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035465#comment-16035465
 ] 

Onur Karaman commented on KAFKA-4595:
-

[~ijuma] Yeah I think KAFKA-5028 solved this issue.

> Controller send thread can't stop when broker change listener event trigger 
> for  dead brokers
> -
>
> Key: KAFKA-4595
> URL: https://issues.apache.org/jira/browse/KAFKA-4595
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.10.1.1
>Reporter: Pengwei
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.1
>
>
> In our test env, we found the controller was not working after a delete topic 
> operation and a network issue; the stack trace is below:
> "ZkClient-EventThread-15-192.168.1.3:2184,192.168.1.4:2184,192.168.1.5:2184" 
> #15 daemon prio=5 os_prio=0 tid=0x7fb76416e000 nid=0x3019 waiting on 
> condition [0x7fb76b7c8000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc05497b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> kafka.utils.ShutdownableThread.awaitShutdown(ShutdownableThread.scala:50)
>   at kafka.utils.ShutdownableThread.shutdown(ShutdownableThread.scala:32)
>   at 
> kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$removeExistingBroker(ControllerChannelManager.scala:128)
>   at 
> kafka.controller.ControllerChannelManager.removeBroker(ControllerChannelManager.scala:81)
>   - locked <0xc0258760> (a java.lang.Object)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply$mcVI$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(ReplicaStateMachine.scala:369)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:369)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
>Locked ownable synchronizers:
>   - <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> "Controller-1001-to-broker-1003-send-thread" #88 prio=5 os_prio=0 
> tid=0x7fb778342000 nid=0x5a4c waiting on condition [0x7fb761de]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc02587f8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> 

[jira] [Commented] (KAFKA-5070) org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the state directory: /opt/rocksdb/pulse10/0_18

2017-06-02 Thread Kevin Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035452#comment-16035452
 ] 

Kevin Chen commented on KAFKA-5070:
---

We saw similar exceptions in our stream applications too; we are using 10.2 
client/server. We have 6 threads per instance and 12 partitions on the incoming 
topic, so 12 tasks in total. We are running 2 instances on 2 nodes (one 
instance each).

We got the following exceptions in different situations:
   ! org.apache.kafka.streams.errors.LockException: task [0_1] Failed to lock 
the state directory: 

One time, the root cause was a rocksdb.flush() exception (we are not using a 
custom implementation); a restart fixed the problem. We only saw that once.

But most of the time it happens during a rebalance, e.g. when we need to bring 
down a node. In that case there is no apparent root cause; here is all I got 
from the stack trace:
WARN  [2017-06-02 21:23:47,591] 
org.apache.kafka.streams.processor.internals.StreamThread: Could not create 
task 0_6. Will retry.
! org.apache.kafka.streams.errors.LockException: task [0_6] Failed to lock the 
state directory: /tmp/aggregator-one/kafka-streams-state/aggregator-one-1/0_6
! at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.(ProcessorStateManager.java:102)
! at 
org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:73)
! at 
org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:108)
! at 
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
! at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
! at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
! at 
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
! at 
org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
! at 
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
! at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
! at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
! at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
! at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
! at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
! at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
! at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
! at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)

> org.apache.kafka.streams.errors.LockException: task [0_18] Failed to lock the 
> state directory: /opt/rocksdb/pulse10/0_18
> 
>
> Key: KAFKA-5070
> URL: https://issues.apache.org/jira/browse/KAFKA-5070
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Linux Version
>Reporter: Dhana
>Assignee: Matthias J. Sax
> Attachments: RocksDB_LockStateDirec.7z
>
>
> Notes: we run two consumer instances on two different machines/nodes.
> We have 400 partitions and 200 stream threads per consumer, with 2 consumers.
> We perform an HA test (on rebalance - shutdown of one of the consumers/brokers); 
> we see this happening.
> Error:
> 2017-04-05 11:36:09.352 WARN  StreamThread:1184 StreamThread-66 - Could not 
> create task 0_115. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_115] Failed to lock 
> the state directory: /opt/rocksdb/pulse10/0_115
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.(ProcessorStateManager.java:102)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:73)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:108)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
> 

[jira] [Commented] (KAFKA-5322) Resolve AddPartitions response error code inconsistency

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035447#comment-16035447
 ] 

ASF GitHub Bot commented on KAFKA-5322:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3204


> Resolve AddPartitions response error code inconsistency
> ---
>
> Key: KAFKA-5322
> URL: https://issues.apache.org/jira/browse/KAFKA-5322
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The AddPartitions request does not support partial failures. Either all 
> partitions are successfully added to the transaction or none of them are. 
> Currently we return a separate error code for each partition that was added 
> to the transaction, which begs the question of what error code to return if 
> we have not actually encountered a fatal partition-level error for some 
> partition. For example, suppose we send AddPartitions with partitions A and 
> B. If A is not authorized, we will not even attempt to add B to the 
> transaction, but what error code should we use? The current solution is to 
> only include partition A and its error code in the response, but this is a 
> little inconsistent with most other request types. Alternatives that have 
> been proposed:
> 1. Instead of a partition-level error, use one global error. We can add a 
> global error message to return friendlier details to the user about which 
> partition had a fault. The downside is that we would have to parse the 
> message contents if we wanted to do any partition-specific handling. We could 
> not easily fill the set of topics in {{TopicAuthorizationException}} for 
> example.
> 2. We can add a new error code to indicate that the broker did not even 
> attempt to add the partition to the transaction. For example: 
> OPERATION_NOT_ATTEMPTED or something like that. 
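Alternative 2 can be sketched with a toy model (hypothetical types, not the 
real Kafka protocol classes): once any partition fails authorization, nothing 
is added to the transaction, and every other requested partition gets an 
explicit not-attempted code rather than being omitted from the response:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AddPartitionsErrors {
    // Hypothetical error codes for the sketch; only the shape matters.
    public enum Error { NONE, TOPIC_AUTHORIZATION_FAILED, OPERATION_NOT_ATTEMPTED }

    // Model an all-or-nothing AddPartitions response: if any partition is
    // unauthorized, no partition is added. Unauthorized partitions carry
    // their real error; the rest are marked OPERATION_NOT_ATTEMPTED, so
    // every requested partition appears in the response.
    public static Map<String, Error> evaluate(Map<String, Boolean> authorized) {
        boolean anyDenied = authorized.containsValue(false);
        Map<String, Error> response = new LinkedHashMap<>();
        for (Map.Entry<String, Boolean> e : authorized.entrySet()) {
            if (!anyDenied) {
                response.put(e.getKey(), Error.NONE);
            } else if (!e.getValue()) {
                response.put(e.getKey(), Error.TOPIC_AUTHORIZATION_FAILED);
            } else {
                response.put(e.getKey(), Error.OPERATION_NOT_ATTEMPTED);
            }
        }
        return response;
    }
}
```

In the A/B example from the description, A would map to the authorization 
error and B to the not-attempted code, keeping the response consistent with 
other partitioned request types.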



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3204: KAFKA-5322: Add an `OPERATION_NOT_ATTEMPTED` error...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3204


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5322) Resolve AddPartitions response error code inconsistency

2017-06-02 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5322:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3204
[https://github.com/apache/kafka/pull/3204]

> Resolve AddPartitions response error code inconsistency
> ---
>
> Key: KAFKA-5322
> URL: https://issues.apache.org/jira/browse/KAFKA-5322
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Apurva Mehta
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> The AddPartitions request does not support partial failures. Either all 
> partitions are successfully added to the transaction or none of them are. 
> Currently we return a separate error code for each partition that was added 
> to the transaction, which begs the question of what error code to return if 
> we have not actually encountered a fatal partition-level error for some 
> partition. For example, suppose we send AddPartitions with partitions A and 
> B. If A is not authorized, we will not even attempt to add B to the 
> transaction, but what error code should we use? The current solution is to 
> only include partition A and its error code in the response, but this is a 
> little inconsistent with most other request types. Alternatives that have 
> been proposed:
> 1. Instead of a partition-level error, use one global error. We can add a 
> global error message to return friendlier details to the user about which 
> partition had a fault. The downside is that we would have to parse the 
> message contents if we wanted to do any partition-specific handling. We could 
> not easily fill the set of topics in {{TopicAuthorizationException}} for 
> example.
> 2. We can add a new error code to indicate that the broker did not even 
> attempt to add the partition to the transaction. For example: 
> OPERATION_NOT_ATTEMPTED or something like that. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5370) Replace uses of old consumer with the new consumer

2017-06-02 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-5370:
--

 Summary: Replace uses of old consumer with the new consumer 
 Key: KAFKA-5370
 URL: https://issues.apache.org/jira/browse/KAFKA-5370
 Project: Kafka
  Issue Type: Improvement
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian
Priority: Minor


Where possible, use the new consumer in tools and tests instead of the old 
consumer, and remove the deprecation warning.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5236) Regression in on-disk log size when using Snappy compression with 0.8.2 log message format

2017-06-02 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5236:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3205
[https://github.com/apache/kafka/pull/3205]

> Regression in on-disk log size when using Snappy compression with 0.8.2 log 
> message format
> --
>
> Key: KAFKA-5236
> URL: https://issues.apache.org/jira/browse/KAFKA-5236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.2.1
>Reporter: Nick Travers
>Assignee: Ismael Juma
>  Labels: regression
> Fix For: 0.11.0.0
>
>
> We recently upgraded our brokers in our production environments from 0.10.1.1 
> to 0.10.2.1 and we've noticed a sizable regression in the on-disk .log file 
> size. For some deployments the increase was as much as 50%.
> We run our brokers with the 0.8.2 log message format version. The majority of 
> our message volume comes from 0.10.x Java clients sending messages encoded 
> with the Snappy codec.
> Some initial testing only shows a regression between the two versions when 
> using Snappy compression with a log message format of 0.8.2.
> I also tested 0.10.x log message formats as well as Gzip compression. The log 
> sizes do not differ in this case, so the issue seems confined to 0.8.2 
> message format and Snappy compression.
> A git-bisect lead me to this commit, which modified the server-side 
> implementation of `Record`:
> https://github.com/apache/kafka/commit/67f1e5b91bf073151ff57d5d656693e385726697
> Here's the PR, which has more context:
> https://github.com/apache/kafka/pull/2140
> Here is a link to the test I used to reproduce this issue:
> https://github.com/nicktrav/kafka/commit/68e8db4fa525e173651ac740edb270b0d90b8818
> cc: [~hachikuji] [~junrao] [~ijuma] [~guozhang] (tagged on original PR)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5368) Kafka Streams skipped-records-rate sensor produces nonzero values when the timestamps are valid

2017-06-02 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035429#comment-16035429
 ] 

Guozhang Wang commented on KAFKA-5368:
--

Sounds good to me. Maybe we can add one in StreamsMetricsImplTest? Also, I 
think the newly added `Sum` could subsume the original `Total` class as well, 
but removing `Total` is not backward compatible, and neither is changing 
`Total` to extend SampledStat and replace `Sum`... Any ideas [~ijuma]?

> Kafka Streams skipped-records-rate sensor produces nonzero values when the 
> timestamps are valid
> ---
>
> Key: KAFKA-5368
> URL: https://issues.apache.org/jira/browse/KAFKA-5368
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Hamidreza Afzali
>Assignee: Hamidreza Afzali
> Fix For: 0.11.0.0
>
>
> Kafka Streams skipped-records-rate sensor produces nonzero values even when 
> the timestamps are valid and records are processed. The values are equal to 
> poll-rate.
> Related issue: KAFKA-5055 
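The reported behavior can be sketched in miniature (a hypothetical Python illustration, not the actual Kafka Streams code): a sensor that is recorded once per poll, regardless of whether any record was actually skipped, will always mirror the poll rate.

```python
# Hypothetical sketch (not the actual Kafka Streams code): a "skipped
# records" counter that is bumped once per poll, even when nothing was
# skipped, ends up tracking the poll count instead of the skip count.
polls = [["ok", "ok"], [], ["ok"]]  # three polls, all timestamps valid

buggy_skips = 0
correct_skips = 0
for records in polls:
    invalid = [r for r in records if r != "ok"]
    buggy_skips += 1               # bug: recorded on every poll
    correct_skips += len(invalid)  # fix: record only actual skips

assert buggy_skips == len(polls)   # buggy sensor equals the poll count
assert correct_skips == 0          # nothing was actually skipped
```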





[jira] [Commented] (KAFKA-3704) Improve mechanism for compression stream block size selection in KafkaProducer

2017-06-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035424#comment-16035424
 ] 

Ismael Juma commented on KAFKA-3704:


The Snappy block size for producer and broker will be 32 KB in 0.11.0.0. 
Similarly, the buffer size for Gzip was increased to 8 KB.

> Improve mechanism for compression stream block size selection in KafkaProducer
> --
>
> Key: KAFKA-3704
> URL: https://issues.apache.org/jira/browse/KAFKA-3704
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Assignee: Ismael Juma
>
> As discovered in https://issues.apache.org/jira/browse/KAFKA-3565, the 
> current default block size (1K) used in Snappy and GZIP may cause a 
> sub-optimal compression ratio for Snappy, and hence reduce throughput. 
> Because we no longer recompress data in the broker, it also impacts what gets 
> stored on disk.
> A solution might be to use the default block size, which is 64K in LZ4, 32K 
> in Snappy and 0.5K in GZIP. The downside is that this solution will require 
> more memory allocated outside of the buffer pool and hence users may need to 
> bump up their JVM heap size, especially for MirrorMakers. Using Snappy as an 
> example, it's an additional 2x32k per batch (as Snappy uses two buffers) and 
> one would expect at least one batch per partition. However, the number of 
> batches per partition can be much higher if the broker is slow to acknowledge 
> producer requests (depending on `buffer.memory`, `batch.size`, message size, 
> etc.).
> Given the above, there are a few things that could be done (potentially more 
> than one):
> 1) A configuration for the producer compression stream buffer size.
> 2) Allocate buffers from the buffer pool and pass them to the compression 
> library. This is possible with Snappy and we could adapt our LZ4 code. It's 
> not possible with GZIP, but it uses a very small buffer by default.
> 3) DONE: Close the existing `RecordBatch.records` when we create a new 
> `RecordBatch` for the `TopicPartition` instead of doing it during 
> `RecordAccumulator.drain`. This would mean that we would only retain 
> resources for one `RecordBatch` per partition, which would improve the worst 
> case scenario significantly.
> Note that we decided that this change was too risky for 0.10.0.0 and reverted 
> the original attempt.
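The effect of the block size on the compression ratio can be demonstrated with a hypothetical stand-in (Python's stdlib zlib, not Kafka or Snappy itself): the DEFLATE window size plays the same role as the Snappy block size discussed above, since repeats that fall outside the window cannot be back-referenced.

```python
import zlib

# Hypothetical illustration (zlib, not Snappy): 20,000 short records that
# repeat with a period of ~22 KB. A 512-byte window cannot "see" the
# repeats; a 32 KB window can, so it compresses far better.
payload = b"".join(b"msg-%06d " % (i % 2000) for i in range(20000))

def compressed_size(window_bits):
    c = zlib.compressobj(level=6, wbits=window_bits)
    return len(c.compress(payload) + c.flush())

small = compressed_size(9)   # 512-byte window, akin to a tiny block size
large = compressed_size(15)  # 32 KB window, akin to the new 32 KB default
assert large < small         # the larger window finds the long-range repeats
```

The same trade-off drives the memory concern in the description: each larger buffer improves the ratio, but the producer must allocate it per in-flight batch.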





[jira] [Commented] (KAFKA-5148) Add configurable compression block size to the broker

2017-06-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035423#comment-16035423
 ] 

Ismael Juma commented on KAFKA-5148:


The Snappy block size for producer and broker will be 32 KB in 0.11.0.0. 
Similarly, the buffer size for Gzip was increased to 8 KB.

> Add configurable compression block size to the broker
> -
>
> Key: KAFKA-5148
> URL: https://issues.apache.org/jira/browse/KAFKA-5148
> Project: Kafka
>  Issue Type: Improvement
>  Components: compression
>Reporter: Dustin Cote
> Fix For: 0.10.2.0
>
>
> Similar to the discussion in KAFKA-3704, we should consider a configurable 
> compression block size on the broker side, especially given the change in 
> block size from 32KB to 1KB in the 0.10.2 release.





[jira] [Updated] (KAFKA-5359) Exceptions from RequestFuture lack parts of the stack trace

2017-06-02 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-5359:
---
Status: Patch Available  (was: Open)

> Exceptions from RequestFuture lack parts of the stack trace
> ---
>
> Key: KAFKA-5359
> URL: https://issues.apache.org/jira/browse/KAFKA-5359
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Magnus Reftel
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When an exception occurs within a task that reports its result using a 
> RequestFuture, that exception is stored in a field on the RequestFuture using 
> the {{raise}} method. In many places in the code where such futures are 
> completed, that exception is then thrown directly using {{throw 
> future.exception();}} (see e.g. 
> [Fetcher.getTopicMetadata|https://github.com/apache/kafka/blob/aebba89a2b9b5ea6a7cab2599555232ef3fe21ad/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L316]).
> This means that the exception that ends up in client code only has stack 
> traces related to the original exception, but nothing leading up to the 
> completion of the future. The client therefore gets no indication of what was 
> going on in the client code - only that it somehow ended up in the Kafka 
> libraries, and that a task failed at some point.
> One solution to this is to use the exceptions from the future as causes for 
> chained exceptions, so that the client gets a stack trace that shows what the 
> client was doing, in addition to getting the stack traces for the exception 
> in the task.
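The proposed chaining can be sketched in Python (a hypothetical stand-in; the actual client is Java, where the stored exception would be passed as the constructor cause of a new exception):

```python
# Hypothetical Python sketch of the fix described above: instead of
# re-raising the stored exception directly, raise a new exception
# *chained* to it, so both stack traces are preserved.
class RequestFuture:
    """Tiny stand-in for the client's RequestFuture."""
    def __init__(self):
        self._exception = None
    def raise_error(self, exc):   # analogous to RequestFuture.raise()
        self._exception = exc
    def exception(self):
        return self._exception

future = RequestFuture()
# The failure happens in the background task and is stored on the future:
future.raise_error(TimeoutError("metadata request timed out"))

try:
    # Old style: `raise future.exception()` loses the caller's frames.
    # Proposed style: chain it as the cause.
    raise RuntimeError("getTopicMetadata failed") from future.exception()
except RuntimeError as err:
    assert isinstance(err.__cause__, TimeoutError)
```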





[jira] [Commented] (KAFKA-5359) Exceptions from RequestFuture lack parts of the stack trace

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035419#comment-16035419
 ] 

ASF GitHub Bot commented on KAFKA-5359:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/3213

KAFKA-5359: Make future exception the exception cause on the client side

Instead of throwing `future.exception()` on the client side, throw an 
exception with `future.exception()` as the cause. This is to better identify 
where on the client side the exception is thrown.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-5359

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3213.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3213


commit cee7244dc58fd48e53064f3abc6600a386b274c6
Author: Vahid Hashemian 
Date:   2017-06-02T20:06:21Z

KAFKA-5359: Make future exception the exception cause on the client side

Instead of throwing `future.exception()` on the client side, throw an 
exception with that `future.exception()` as the cause.
This is to better identify where on the client side the exception is thrown.




> Exceptions from RequestFuture lack parts of the stack trace
> ---
>
> Key: KAFKA-5359
> URL: https://issues.apache.org/jira/browse/KAFKA-5359
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Magnus Reftel
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When an exception occurs within a task that reports its result using a 
> RequestFuture, that exception is stored in a field on the RequestFuture using 
> the {{raise}} method. In many places in the code where such futures are 
> completed, that exception is then thrown directly using {{throw 
> future.exception();}} (see e.g. 
> [Fetcher.getTopicMetadata|https://github.com/apache/kafka/blob/aebba89a2b9b5ea6a7cab2599555232ef3fe21ad/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L316]).
> This means that the exception that ends up in client code only has stack 
> traces related to the original exception, but nothing leading up to the 
> completion of the future. The client therefore gets no indication of what was 
> going on in the client code - only that it somehow ended up in the Kafka 
> libraries, and that a task failed at some point.
> One solution to this is to use the exceptions from the future as causes for 
> chained exceptions, so that the client gets a stack trace that shows what the 
> client was doing, in addition to getting the stack traces for the exception 
> in the task.





[GitHub] kafka pull request #3213: KAFKA-5359: Make future exception the exception ca...

2017-06-02 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/3213

KAFKA-5359: Make future exception the exception cause on the client side

Instead of throwing `future.exception()` on the client side, throw an 
exception with `future.exception()` as the cause. This is to better identify 
where on the client side the exception is thrown.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-5359

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3213.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3213


commit cee7244dc58fd48e53064f3abc6600a386b274c6
Author: Vahid Hashemian 
Date:   2017-06-02T20:06:21Z

KAFKA-5359: Make future exception the exception cause on the client side

Instead of throwing `future.exception()` on the client side, throw an 
exception with that `future.exception()` as the cause.
This is to better identify where on the client side the exception is thrown.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5236) Regression in on-disk log size when using Snappy compression with 0.8.2 log message format

2017-06-02 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035417#comment-16035417
 ] 

Ismael Juma commented on KAFKA-5236:


Thanks for verifying [~nickt].

> Regression in on-disk log size when using Snappy compression with 0.8.2 log 
> message format
> --
>
> Key: KAFKA-5236
> URL: https://issues.apache.org/jira/browse/KAFKA-5236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.2.1
>Reporter: Nick Travers
>Assignee: Ismael Juma
>  Labels: regression
> Fix For: 0.11.0.0
>
>
> We recently upgraded our brokers in our production environments from 0.10.1.1 
> to 0.10.2.1 and we've noticed a sizable regression in the on-disk .log file 
> size. For some deployments the increase was as much as 50%.
> We run our brokers with the 0.8.2 log message format version. The majority of 
> our message volume comes from 0.10.x Java clients sending messages encoded 
> with the Snappy codec.
> Some initial testing only shows a regression between the two versions when 
> using Snappy compression with a log message format of 0.8.2.
> I also tested 0.10.x log message formats as well as Gzip compression. The log 
> sizes do not differ in this case, so the issue seems confined to 0.8.2 
> message format and Snappy compression.
> A git-bisect led me to this commit, which modified the server-side 
> implementation of `Record`:
> https://github.com/apache/kafka/commit/67f1e5b91bf073151ff57d5d656693e385726697
> Here's the PR, which has more context:
> https://github.com/apache/kafka/pull/2140
> Here is a link to the test I used to reproduce this issue:
> https://github.com/nicktrav/kafka/commit/68e8db4fa525e173651ac740edb270b0d90b8818
> cc: [~hachikuji] [~junrao] [~ijuma] [~guozhang] (tagged on original PR)





[jira] [Updated] (KAFKA-5019) Exactly-once upgrade notes

2017-06-02 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5019:
---
Status: Patch Available  (was: In Progress)

> Exactly-once upgrade notes
> --
>
> Key: KAFKA-5019
> URL: https://issues.apache.org/jira/browse/KAFKA-5019
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: documentation, exactly-once
> Fix For: 0.11.0.0
>
>
> We have added some basic upgrade notes, but we need to flesh them out. We 
> should cover every item that has compatibility implications as well as new and 
> updated protocol APIs.





[jira] [Commented] (KAFKA-5236) Regression in on-disk log size when using Snappy compression with 0.8.2 log message format

2017-06-02 Thread Nick Travers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035404#comment-16035404
 ] 

Nick Travers commented on KAFKA-5236:
-

Thanks for the patch [~ijuma]! I ran the same tests I used originally and 
confirmed that there was no regression in the on-disk size.

> Regression in on-disk log size when using Snappy compression with 0.8.2 log 
> message format
> --
>
> Key: KAFKA-5236
> URL: https://issues.apache.org/jira/browse/KAFKA-5236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.2.1
>Reporter: Nick Travers
>Assignee: Ismael Juma
>  Labels: regression
> Fix For: 0.11.0.0
>
>
> We recently upgraded our brokers in our production environments from 0.10.1.1 
> to 0.10.2.1 and we've noticed a sizable regression in the on-disk .log file 
> size. For some deployments the increase was as much as 50%.
> We run our brokers with the 0.8.2 log message format version. The majority of 
> our message volume comes from 0.10.x Java clients sending messages encoded 
> with the Snappy codec.
> Some initial testing only shows a regression between the two versions when 
> using Snappy compression with a log message format of 0.8.2.
> I also tested 0.10.x log message formats as well as Gzip compression. The log 
> sizes do not differ in this case, so the issue seems confined to 0.8.2 
> message format and Snappy compression.
> A git-bisect led me to this commit, which modified the server-side 
> implementation of `Record`:
> https://github.com/apache/kafka/commit/67f1e5b91bf073151ff57d5d656693e385726697
> Here's the PR, which has more context:
> https://github.com/apache/kafka/pull/2140
> Here is a link to the test I used to reproduce this issue:
> https://github.com/nicktrav/kafka/commit/68e8db4fa525e173651ac740edb270b0d90b8818
> cc: [~hachikuji] [~junrao] [~ijuma] [~guozhang] (tagged on original PR)





[jira] [Commented] (KAFKA-5019) Exactly-once upgrade notes

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035403#comment-16035403
 ] 

ASF GitHub Bot commented on KAFKA-5019:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3212

KAFKA-5019: Upgrades notes for idempotent/transactional features and new 
message format



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5019

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3212.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3212


commit 84390c7a6c65e78d01e7eb164a65e27d1cd03adb
Author: Jason Gustafson 
Date:   2017-06-02T20:49:32Z

KAFKA-5019: Upgrades notes for idempotent/transactional features and new 
message format




> Exactly-once upgrade notes
> --
>
> Key: KAFKA-5019
> URL: https://issues.apache.org/jira/browse/KAFKA-5019
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: documentation, exactly-once
> Fix For: 0.11.0.0
>
>
> We have added some basic upgrade notes, but we need to flesh them out. We 
> should cover every item that has compatibility implications as well as new and 
> updated protocol APIs.





[GitHub] kafka pull request #3212: KAFKA-5019: Upgrades notes for idempotent/transact...

2017-06-02 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3212

KAFKA-5019: Upgrades notes for idempotent/transactional features and new 
message format



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5019

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3212.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3212


commit 84390c7a6c65e78d01e7eb164a65e27d1cd03adb
Author: Jason Gustafson 
Date:   2017-06-02T20:49:32Z

KAFKA-5019: Upgrades notes for idempotent/transactional features and new 
message format






[jira] [Commented] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2017-06-02 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035365#comment-16035365
 ] 

Onur Karaman commented on KAFKA-1595:
-

Sounds good.

> Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
> -
>
> Key: KAFKA-1595
> URL: https://issues.apache.org/jira/browse/KAFKA-1595
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Jagbir
>Assignee: Ismael Juma
>  Labels: newbie
>
> The following issue is created as a follow up suggested by Jun Rao
> in a kafka news group message with the Subject
> "Blocking Recursive parsing from 
> kafka.consumer.TopicCount$.constructTopicCount"
> SUMMARY:
> An issue was detected in a typical cluster of 3 kafka instances backed
> by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
> java version 1.7.0_65). On consumer end, when consumers get recycled,
> there is a troubling JSON parsing recursion which takes a busy lock and
> blocks the consumer thread pool.
> In 0.8.1.1 scala client library ZookeeperConsumerConnector.scala:355 takes
> a global lock (0xd3a7e1d0) during the rebalance, and fires an
> expensive JSON parsing, while keeping the other consumers from shutting
> down, see, e.g,
> at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
> The deep recursive JSON parsing should be deprecated in favor
> of a better JSON parser, see, e.g,
> http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
> DETAILS:
> The first dump is for a recursive blocking thread holding the lock for 
> 0xd3a7e1d0
> and the subsequent dump is for a waiting thread.
> (Please grep for 0xd3a7e1d0 to see the locked object.)
> 
> -8<-
> "Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor"
> prio=10 tid=0x7f24dc285800 nid=0xda9 runnable [0x7f249e40b000]
> java.lang.Thread.State: RUNNABLE
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.p$7(Parsers.scala:722)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.continue$1(Parsers.scala:726)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:737)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:721)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Success.flatMapWithNext(Parsers.scala:142)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> 
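The remedy implied by the report can be sketched with a hypothetical Python stand-in (the actual client is Scala): do the expensive JSON parsing before taking the shared rebalance lock, so other consumer threads are not blocked while one thread parses a large topic-count document.

```python
import json
import threading

# Hypothetical sketch (not the Scala client): parse outside the lock,
# then hold the shared lock only for the cheap state update.
rebalance_lock = threading.Lock()
topic_count_json = json.dumps({"consumer-1": {"topic-a": 2, "topic-b": 1}})

parsed = json.loads(topic_count_json)  # expensive work, no lock held
with rebalance_lock:                   # lock only for the shared update
    subscriptions = {c: dict(t) for c, t in parsed.items()}

assert subscriptions["consumer-1"] == {"topic-a": 2, "topic-b": 1}
```

Swapping the slow combinator-based parser for a faster JSON library (as the report suggests) shrinks the critical section further, but moving the parse out from under the lock is what stops one consumer's rebalance from stalling the others' shutdown.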

[jira] [Commented] (KAFKA-5236) Regression in on-disk log size when using Snappy compression with 0.8.2 log message format

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035353#comment-16035353
 ] 

ASF GitHub Bot commented on KAFKA-5236:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3205


> Regression in on-disk log size when using Snappy compression with 0.8.2 log 
> message format
> --
>
> Key: KAFKA-5236
> URL: https://issues.apache.org/jira/browse/KAFKA-5236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.2.1
>Reporter: Nick Travers
>Assignee: Ismael Juma
>  Labels: regression
> Fix For: 0.11.0.0
>
>
> We recently upgraded our brokers in our production environments from 0.10.1.1 
> to 0.10.2.1 and we've noticed a sizable regression in the on-disk .log file 
> size. For some deployments the increase was as much as 50%.
> We run our brokers with the 0.8.2 log message format version. The majority of 
> our message volume comes from 0.10.x Java clients sending messages encoded 
> with the Snappy codec.
> Some initial testing only shows a regression between the two versions when 
> using Snappy compression with a log message format of 0.8.2.
> I also tested 0.10.x log message formats as well as Gzip compression. The log 
> sizes do not differ in this case, so the issue seems confined to 0.8.2 
> message format and Snappy compression.
> A git-bisect led me to this commit, which modified the server-side 
> implementation of `Record`:
> https://github.com/apache/kafka/commit/67f1e5b91bf073151ff57d5d656693e385726697
> Here's the PR, which has more context:
> https://github.com/apache/kafka/pull/2140
> Here is a link to the test I used to reproduce this issue:
> https://github.com/nicktrav/kafka/commit/68e8db4fa525e173651ac740edb270b0d90b8818
> cc: [~hachikuji] [~junrao] [~ijuma] [~guozhang] (tagged on original PR)





[GitHub] kafka pull request #3205: KAFKA-5236; Increase the block/buffer size when co...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3205




[jira] [Commented] (KAFKA-5368) Kafka Streams skipped-records-rate sensor produces nonzero values when the timestamps are valid

2017-06-02 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035344#comment-16035344
 ] 

Matthias J. Sax commented on KAFKA-5368:


Could we add a test for this? I think this issue was introduced by code 
refactoring and slipped through because of a missing test...

> Kafka Streams skipped-records-rate sensor produces nonzero values when the 
> timestamps are valid
> ---
>
> Key: KAFKA-5368
> URL: https://issues.apache.org/jira/browse/KAFKA-5368
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Hamidreza Afzali
>Assignee: Hamidreza Afzali
> Fix For: 0.11.0.0
>
>
> Kafka Streams skipped-records-rate sensor produces nonzero values even when 
> the timestamps are valid and records are processed. The values are equal to 
> poll-rate.
> Related issue: KAFKA-5055 





[jira] [Commented] (KAFKA-3982) Issue with processing order of consumer properties in console consumer

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035298#comment-16035298
 ] 

ASF GitHub Bot commented on KAFKA-3982:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1655


> Issue with processing order of consumer properties in console consumer
> --
>
> Key: KAFKA-3982
> URL: https://issues.apache.org/jira/browse/KAFKA-3982
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> With the recent introduction of {{consumer.property}} argument in console 
> consumer, both new and old consumer could overwrite certain properties 
> provided using this new argument.
> Specifically, the old consumer would overwrite the values provided for 
> [{{auto.offset.reset}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L173]
>  and 
> [{{zookeeper.connect}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L174],
>  and the new consumer would overwrite the values provided for 
> [{{auto.offset.reset}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L196],
>  
> [{{bootstrap.servers}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L197],
>  
> [{{key.deserializer}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L198],
>  and 
> [{{key.deserializer}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L199].
> For example, running the old consumer as {{bin/kafka-console-consumer.sh 
> --zookeeper localhost:2181 --topic foo --consumer-property 
> auto.offset.reset=none}}, the value that's eventually selected for 
> {{auto.offset.reset}} will be {{largest}}, overwriting what the user provides 
> in the command line.
> This seems to be because the properties provided via {{consumer.property}} 
> argument are not considered when finalizing the configuration of the consumer.
> Some properties can now be provided in three different places (directly in 
> the command line, via the {{consumer.property}} argument, and via the 
> {{consumer.config}} argument, in the same order of precedence).
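The intended precedence can be sketched with plain Python dict merging (a hypothetical illustration, not ConsoleConsumer itself): merge the property sources from lowest to highest precedence, so later sources win instead of being clobbered by defaults.

```python
# Hypothetical sketch: defaults < --consumer.config < --consumer-property
# < direct command-line flags. Later sources override earlier ones.
defaults = {"auto.offset.reset": "largest"}
consumer_config = {"zookeeper.connect": "localhost:2181"}  # --consumer.config file
consumer_property = {"auto.offset.reset": "none"}          # --consumer-property
cli_flags = {}                                             # direct arguments

merged = {**defaults, **consumer_config, **consumer_property, **cli_flags}
assert merged["auto.offset.reset"] == "none"   # the user's value survives
assert merged["zookeeper.connect"] == "localhost:2181"
```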





[GitHub] kafka pull request #1655: KAFKA-3982: Fix processing order of some of the co...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1655




[jira] [Updated] (KAFKA-3982) Issue with processing order of consumer properties in console consumer

2017-06-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3982:
-
Resolution: Fixed
Fix Version/s: 0.11.0.0
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1655
[https://github.com/apache/kafka/pull/1655]

> Issue with processing order of consumer properties in console consumer
> --
>
> Key: KAFKA-3982
> URL: https://issues.apache.org/jira/browse/KAFKA-3982
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> With the recent introduction of {{consumer.property}} argument in console 
> consumer, both new and old consumer could overwrite certain properties 
> provided using this new argument.
> Specifically, the old consumer would overwrite the values provided for 
> [{{auto.offset.reset}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L173]
>  and 
> [{{zookeeper.connect}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L174],
>  and the new consumer would overwrite the values provided for 
> [{{auto.offset.reset}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L196],
>  
> [{{bootstrap.servers}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L197],
>  
> [{{key.deserializer}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L198],
>  and 
> [{{value.deserializer}}|https://github.com/apache/kafka/blob/10bbffd75439e10fe9db6cf0aa48a7da7e386ef3/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L199].
> For example, running the old consumer as {{bin/kafka-console-consumer.sh 
> --zookeeper localhost:2181 --topic foo --consumer-property 
> auto.offset.reset=none}} the value that's eventually selected for 
> {{auto.offset.reset}} will be {{largest}}, overwriting what the user provides 
> in the command line.
> This seems to be because the properties provided via {{consumer.property}} 
> argument are not considered when finalizing the configuration of the consumer.
> Some properties can now be provided in three different places (directly in 
> the command line, via the {{consumer.property}} argument, and via the 
> {{consumer.config}} argument, in the same order of precedence).
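The precedence described above can be sketched with plain java.util.Properties: each higher-precedence layer overwrites lower ones, and tool defaults are applied only when the user has not supplied a value. The class and method names below are hypothetical illustrations, not the actual ConsoleConsumer code:

```java
import java.util.Properties;

public class PropertyPrecedence {
    // Merge layers lowest-precedence first; a later (higher-precedence) layer
    // overwrites an earlier one.
    static Properties merge(Properties... layersLowToHigh) {
        Properties result = new Properties();
        for (Properties layer : layersLowToHigh) {
            result.putAll(layer);
        }
        return result;
    }

    // A tool default must never clobber a user-provided value — the reported
    // bug amounts to using an unconditional put() here instead.
    static void setIfAbsent(Properties props, String key, String defaultValue) {
        if (!props.containsKey(key)) {
            props.setProperty(key, defaultValue);
        }
    }

    public static void main(String[] args) {
        Properties fromConfigFile = new Properties();   // --consumer.config (lowest)
        fromConfigFile.setProperty("auto.offset.reset", "earliest");
        Properties fromPropertyFlag = new Properties(); // --consumer-property (higher)
        fromPropertyFlag.setProperty("auto.offset.reset", "none");

        Properties merged = merge(fromConfigFile, fromPropertyFlag);
        setIfAbsent(merged, "auto.offset.reset", "latest"); // default, only if unset
        System.out.println(merged.getProperty("auto.offset.reset")); // prints "none"
    }
}
```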



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5368) Kafka Streams skipped-records-rate sensor produces nonzero values when the timestamps are valid

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035281#comment-16035281
 ] 

ASF GitHub Bot commented on KAFKA-5368:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3206


> Kafka Streams skipped-records-rate sensor produces nonzero values when the 
> timestamps are valid
> ---
>
> Key: KAFKA-5368
> URL: https://issues.apache.org/jira/browse/KAFKA-5368
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Hamidreza Afzali
>Assignee: Hamidreza Afzali
> Fix For: 0.11.0.0
>
>
> Kafka Streams skipped-records-rate sensor produces nonzero values even when 
> the timestamps are valid and records are processed. The values are equal to 
> poll-rate.
> Related issue: KAFKA-5055 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3206: KAFKA-5368 Kafka Streams skipped-records-rate sens...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3206


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5368) Kafka Streams skipped-records-rate sensor produces nonzero values when the timestamps are valid

2017-06-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5368.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 3206
[https://github.com/apache/kafka/pull/3206]

> Kafka Streams skipped-records-rate sensor produces nonzero values when the 
> timestamps are valid
> ---
>
> Key: KAFKA-5368
> URL: https://issues.apache.org/jira/browse/KAFKA-5368
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Hamidreza Afzali
>Assignee: Hamidreza Afzali
> Fix For: 0.11.0.0
>
>
> Kafka Streams skipped-records-rate sensor produces nonzero values even when 
> the timestamps are valid and records are processed. The values are equal to 
> poll-rate.
> Related issue: KAFKA-5055 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5327) Console Consumer should only poll for up to max messages

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035261#comment-16035261
 ] 

ASF GitHub Bot commented on KAFKA-5327:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3148


> Console Consumer should only poll for up to max messages
> 
>
> Key: KAFKA-5327
> URL: https://issues.apache.org/jira/browse/KAFKA-5327
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dustin Cote
>Assignee: huxi
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> The ConsoleConsumer has a --max-messages flag that can be used to limit the 
> number of messages consumed. However, the number of records actually consumed 
> is governed by max.poll.records. This means you see one message on the 
> console, but your offset has moved forward a default of 500, which is kind of 
> counterintuitive. It would be good to only commit offsets for messages we 
> have printed to the console.
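The fix described above amounts to bounding how far the committed offset advances by what was actually printed, not by what a poll() batch happened to fetch. A stdlib-only sketch of that bookkeeping (the offsets are simulated; this is not the real Consumer API):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class MaxMessagesSketch {
    // Returns the offset that should be committed: one past the last printed
    // record, regardless of how many records the poll() batch fetched.
    static long offsetToCommit(List<Long> polledOffsets, int maxMessages, long startOffset) {
        int printed = Math.min(maxMessages, polledOffsets.size());
        return printed == 0 ? startOffset : polledOffsets.get(printed - 1) + 1;
    }

    public static void main(String[] args) {
        // A poll returned 500 records (offsets 0..499) but the user asked for 1 message.
        List<Long> batch = LongStream.range(0, 500).boxed().collect(Collectors.toList());
        System.out.println(offsetToCommit(batch, 1, 0L)); // prints 1, not 500
    }
}
```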



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3148: KAFKA-5327: ConsoleConsumer should manually commit...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3148


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5327) Console Consumer should only poll for up to max messages

2017-06-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5327.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 3148
[https://github.com/apache/kafka/pull/3148]

> Console Consumer should only poll for up to max messages
> 
>
> Key: KAFKA-5327
> URL: https://issues.apache.org/jira/browse/KAFKA-5327
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dustin Cote
>Assignee: huxi
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> The ConsoleConsumer has a --max-messages flag that can be used to limit the 
> number of messages consumed. However, the number of records actually consumed 
> is governed by max.poll.records. This means you see one message on the 
> console, but your offset has moved forward a default of 500, which is kind of 
> counterintuitive. It would be good to only commit offsets for messages we 
> have printed to the console.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3211: Minor: make flush no-op as we don't need to call f...

2017-06-02 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/3211

Minor: make flush no-op as we don't need to call flush on commit.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka MINOR_no_flush_on_commit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3211.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3211


commit 7fd20819b250477e4e4f36fe0da60e0ff74a1403
Author: Bill Bejeck 
Date:   2017-06-02T18:12:52Z

Minor: make flush no-op as we don't need to call flush on commit.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5292) Update authorization checks in AdminClient and add authorization tests

2017-06-02 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-5292:
---
Summary: Update authorization checks in AdminClient and add authorization 
tests  (was: Authorization tests for AdminClient)

> Update authorization checks in AdminClient and add authorization tests
> --
>
> Key: KAFKA-5292
> URL: https://issues.apache.org/jira/browse/KAFKA-5292
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Colin P. McCabe
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> AuthorizerIntegrationTest includes protocol, consumer and producer tests. We 
> should add tests for the AdminClient as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5366) Add cases for concurrent transactional reads and writes in system tests

2017-06-02 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5366:

Labels: exactly-once  (was: )

> Add cases for concurrent transactional reads and writes in system tests
> ---
>
> Key: KAFKA-5366
> URL: https://issues.apache.org/jira/browse/KAFKA-5366
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.11.0.0
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>  Labels: exactly-once
> Fix For: 0.11.1.0
>
>
> Currently the transactions system test does a transactional copy while 
> bouncing brokers and clients, and then does a verifying read on the output 
> topic to ensure that it exactly matches the input. 
> We should also have a transactional consumer reading the tail of the output 
> topic as the writes are happening, and then assert that the values _it_ reads 
> also exactly match the values in the source topics. 
> This test really exercises the abort index, and we don't have any of them in 
> the system or integration tests right now. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5164) SetSchemaMetadata does not replace the schemas in structs correctly

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035053#comment-16035053
 ] 

ASF GitHub Bot commented on KAFKA-5164:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3198


> SetSchemaMetadata does not replace the schemas in structs correctly
> ---
>
> Key: KAFKA-5164
> URL: https://issues.apache.org/jira/browse/KAFKA-5164
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Randall Hauch
> Fix For: 0.11.0.0, 0.11.1.0
>
>
> In SetSchemaMetadataTest we verify that the name and version of the schema in 
> the record have been replaced:
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/test/java/org/apache/kafka/connect/transforms/SetSchemaMetadataTest.java#L62
> However, in the case of Structs, the schema will be attached to both the 
> record and the Struct itself. So we correctly rebuild the Record:
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L77
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L104
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L119
> But if the key or value are a struct, they will still contain the old schema 
> embedded in the struct.
> Ultimately this can lead to validations in other code failing (even for very 
> simple changes like adjusting the name of a schema):
> {code}
> (org.apache.kafka.connect.runtime.WorkerTask:141)
> org.apache.kafka.connect.errors.DataException: Mismatching struct schema
> at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:471)
> at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:295)
> at 
> io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:73)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:196)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:167)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution to this is probably to check whether we're dealing with a Struct 
> when we use a new schema and potentially copy/reallocate it.
> This particular issue would only appear if we don't modify the data, so I 
> think SetSchemaMetadata is currently the only transformation that would have 
> the issue.
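The "copy/reallocate" idea above can be illustrated without the Connect API: when the value is a struct, rebuild it against the updated schema rather than reusing the old schema embedded in it. The Schema and Struct types below are simplified stand-ins for Connect's classes, not the real ones:

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaSwapSketch {
    record Schema(String name, int version) {}

    static final class Struct {
        final Schema schema;
        final Map<String, Object> fields;
        Struct(Schema schema, Map<String, Object> fields) {
            this.schema = schema;
            this.fields = fields;
        }
    }

    // Rebuild the struct so its embedded schema matches the record's updated
    // schema; field values are carried over unchanged.
    static Struct withSchema(Struct old, Schema updated) {
        return new Struct(updated, new HashMap<>(old.fields));
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new HashMap<>();
        fields.put("id", 42);
        Struct s = new Struct(new Schema("old.name", 1), fields);
        Struct fixed = withSchema(s, new Schema("new.name", 2));
        System.out.println(fixed.schema.name() + " " + fixed.fields.get("id")); // prints "new.name 42"
    }
}
```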



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3198: KAFKA-5164 Ensure SetSchemaMetadata updates key or...

2017-06-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3198


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5164) SetSchemaMetadata does not replace the schemas in structs correctly

2017-06-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-5164:
-
   Resolution: Fixed
Fix Version/s: 0.11.1.0
   0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3198
[https://github.com/apache/kafka/pull/3198]

> SetSchemaMetadata does not replace the schemas in structs correctly
> ---
>
> Key: KAFKA-5164
> URL: https://issues.apache.org/jira/browse/KAFKA-5164
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Randall Hauch
> Fix For: 0.11.0.0, 0.11.1.0
>
>
> In SetSchemaMetadataTest we verify that the name and version of the schema in 
> the record have been replaced:
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/test/java/org/apache/kafka/connect/transforms/SetSchemaMetadataTest.java#L62
> However, in the case of Structs, the schema will be attached to both the 
> record and the Struct itself. So we correctly rebuild the Record:
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L77
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L104
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L119
> But if the key or value are a struct, they will still contain the old schema 
> embedded in the struct.
> Ultimately this can lead to validations in other code failing (even for very 
> simple changes like adjusting the name of a schema):
> {code}
> (org.apache.kafka.connect.runtime.WorkerTask:141)
> org.apache.kafka.connect.errors.DataException: Mismatching struct schema
> at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:471)
> at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:295)
> at 
> io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:73)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:196)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:167)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution to this is probably to check whether we're dealing with a Struct 
> when we use a new schema and potentially copy/reallocate it.
> This particular issue would only appear if we don't modify the data, so I 
> think SetSchemaMetadata is currently the only transformation that would have 
> the issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5369) Producer hangs on metadata fetch from broker in security authorization related failures and no meaningful exception is thrown

2017-06-02 Thread Koelli Mungee (JIRA)
Koelli Mungee created KAFKA-5369:


 Summary: Producer hangs on metadata fetch from broker in security 
authorization related failures and no meaningful exception is thrown
 Key: KAFKA-5369
 URL: https://issues.apache.org/jira/browse/KAFKA-5369
 Project: Kafka
  Issue Type: Improvement
  Components: consumer, producer 
Affects Versions: 0.10.0.0
Reporter: Koelli Mungee


Debugging security related misconfigurations becomes painful since the only 
symptom is a hang trying to fetch the metadata on the producer side

at java.lang.Object.wait(Native Method)
at org.apache.kafka.clients.Metadata.awaitUpdate(Metadata.java:129)
at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:528)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:441)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:430)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:353)

There are no meaningful errors on the broker side either. Enabling 
-Djavax.net.debug=all and/or -Dsun.security.krb5.debug=true is an option; 
however, this should be improved to produce a meaningful error.
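Until the error reporting improves, one common mitigation (an operational workaround, not part of this ticket) is to bound the metadata wait with the producer's max.block.ms setting, so send() fails fast with a TimeoutException instead of hanging in waitOnMetadata():

```java
import java.util.Properties;

public class FailFastProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093");
        props.put("security.protocol", "SASL_SSL");
        // Bound the metadata wait: send() throws a TimeoutException after 10s
        // instead of blocking indefinitely when auth is misconfigured.
        props.put("max.block.ms", "10000");
        System.out.println(props.getProperty("max.block.ms")); // prints "10000"
    }
}
```

These properties would then be passed to the KafkaProducer constructor as usual.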



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-138: Change punctuate semantics

2017-06-02 Thread Michal Borowiecki

Hi Matthias,

Apologies, somehow I totally missed this email earlier.

Wrt ValueTransformer, I added it to the list of deprecated methods 
(PR is up to date).


Wrt Cancellable vs Cancelable:

I'm not fluent enough to have spotted this nuance, but having googled 
for it, you are right.


On the other hand however, the precedent seems to have been set by 
java.util.concurrent.Cancellable and akka for instance followed that 
with akka.actor.Cancellable.


Given established heritage in computing context, I'd err on the side of 
consistency with prior practice.


Unless anyone has strong opinions on this matter?


Thanks,

Michal


On 04/05/17 20:43, Matthias J. Sax wrote:

Hi,

thanks for updating the KIP. Looks good to me overall.

I think adding `Cancellable` (or should it be `Cancelable` to follow
American English?) is a clean solution, in contrast to the proposed
alternative.

One minor comment: can you add `ValueTransformer#punctuate()` to the
list of deprecated methods?


-Matthias



On 5/4/17 1:41 AM, Michal Borowiecki wrote:

Further in this direction I've updated the main proposal to incorporate
the Cancellable return type for ProcessorContext.schedule and the
guidance on how to implement "hybrid" punctuation with the proposed 2
PunctuationTypes.

I look forward to more comments on whether the Cancellable return type is
an agreeable solution and on its precise definition.

I shall move all alternatives other than the main proposal into the
Rejected Alternatives section and if I hear any objections, I'll move
those back up and we'll discuss further.


Looking forward to all comments and suggestions.


Thanks,

Michal


On 01/05/17 18:23, Michal Borowiecki wrote:

Hi all,

As promised, here is my take at how one could implement the previously
discussed hybrid semantics using the 2 PunctuationType callbacks (one
for STREAM_TIME and one for SYSTEM_TIME).

However, there's a twist.

Since currently calling context.schedule() adds a new
PunctuationSchedule and does not overwrite the previous one, a slight
change would be required:

a) either that PunctuationSchedules are cancellable

b) or that calling schedule() overwrites (cancels) the previous one
with the given PunctuationType (but that's not how it works currently)


Below is an example assuming approach a) is implemented by having
schedule return Cancellable instead of void.

ProcessorContext context;
long streamTimeInterval = ...;
long systemTimeUpperBound = ...; // e.g. systemTimeUpperBound = streamTimeInterval + some tolerance
Cancellable streamTimeSchedule;
Cancellable systemTimeSchedule;
long lastStreamTimePunctation = -1;

public void init(ProcessorContext context){
    this.context = context;
    streamTimeSchedule = context.schedule(PunctuationType.STREAM_TIME,
        streamTimeInterval, this::streamTimePunctuate);
    systemTimeSchedule = context.schedule(PunctuationType.SYSTEM_TIME,
        systemTimeUpperBound, this::systemTimePunctuate);
}

public void streamTimePunctuate(long streamTime){
    periodicBusiness(streamTime);

    systemTimeSchedule.cancel();
    systemTimeSchedule = context.schedule(PunctuationType.SYSTEM_TIME,
        systemTimeUpperBound, this::systemTimePunctuate);
}

public void systemTimePunctuate(long systemTime){
    periodicBusiness(context.timestamp());

    streamTimeSchedule.cancel();
    streamTimeSchedule = context.schedule(PunctuationType.STREAM_TIME,
        streamTimeInterval, this::streamTimePunctuate);
}

public void periodicBusiness(long streamTime){
    // guard against streamTime == -1, easy enough.
    // if you need system time instead, just use System.currentTimeMillis()

    // do something businessy here
}

Where Cancellable is either an interface containing just a single void
cancel() method, or one that also has a boolean isCancelled() method.


Please let your opinions known whether we should proceed in this
direction or leave "hybrid" considerations out of scope.

Looking forward to hearing your thoughts.

Thanks,
Michal

On 30/04/17 20:07, Michal Borowiecki wrote:

Hi Matthias,

I'd like to start moving the discarded ideas into Rejected
Alternatives section. Before I do, I want to tidy them up, ensure
they've each been given proper treatment.

To that end let me go back to one of your earlier comments about the
original suggestion (A) to put that to bed.


On 04/04/17 06:44, Matthias J. Sax wrote:

(A) You argue, that users can still "punctuate" on event-time via
process(), but I am not sure if this is possible. Note that users only
get record timestamps via context.timestamp(). Thus, users would need to
track the time progress per partition (based on the partitions they
observe via context.partition()). (This alone puts a huge burden on the
user by itself.) However, users are not notified at startup what

Re: [DISCUSS] KIP-167: Add a restoreAll method to StateRestoreCallback

2017-06-02 Thread Bill Bejeck
Guozhang, Matthias,

Thanks for the comments.  I have updated the KIP, (JIRA title and
description as well).

I had thought about introducing a separate interface altogether, but
extending the current one makes more sense.

As for intermediate callbacks based on time or number of records, I think
the latest update to the KIP addresses this point of querying for
intermediate results, but it would be per batch restored.

Thanks,
Bill





On Fri, Jun 2, 2017 at 8:36 AM, Jim Jagielski  wrote:

>
> > On Jun 2, 2017, at 12:54 AM, Matthias J. Sax 
> wrote:
> >
> > With regard to backward compatibility, we should not change the current
> > interface, but add a new interface that extends the current one.
> >
>
> ++1
>
>


Re: 0.11.0.0 Release Update

2017-06-02 Thread Michal Borowiecki

Hi all,

So will Exactly Once Semantics be reason enough to bump version to 1.0? 
Or is the leading zero here to stay indefinitely? :-)


Cheers,

Michal


On 05/05/17 04:28, Ismael Juma wrote:

Hi all,

We're quickly approaching our next time-based release. If you missed any of
the updates on the new time-based releases we'll be following, see
https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan
for an explanation.

The release plan can be found in the usual location:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.0

Here are the important dates (also documented in the wiki):

- KIP Freeze: May 10, 2017 (a KIP must be accepted by this date in order
to be considered for this release)
- Feature Freeze: May 17, 2017 (major features merged & working on
stabilization, minor features have PR, release branch cut; anything not in
this state will be automatically moved to the next release in JIRA)
- Code Freeze: May 31, 2017 (first RC created now)
- Release: June 14, 2017

There are a couple of changes based on Ewen's feedback as release manager
for 0.10.2.0:

1. We now have a KIP freeze one week before the feature freeze to avoid
the risky and confusing situation where some KIPs are being discussed,
voted on and merged all in the same week.
2. All the dates were moved from Friday to Wednesday so that release
management doesn't spill over to the weekend.

KIPs: we have 24 adopted with 10 already committed and 10 with patches in
flight. The feature freeze is 12 days away so we have a lot of reviewing to
do, but significant changes have been merged already.

Open JIRAs: As usual, we have a lot!

https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%200.11.0.0%20ORDER%20BY%20priority%20DESC

146 at the moment. As we get nearer to the feature freeze, I will start
moving JIRAs out of this release.

* Closed JIRAs: So far ~191 closed tickets for 0.11.0.0:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%200.11.0.0%20ORDER%20BY%20priority%20DESC

* Release features:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.0 has
a "Release Features" section that will be included with the release
notes/email for the release. I added some items to get it going. Please add
to this list anything you think is worth noting.

I'll plan to give another update next week just before the KIP freeze.

Ismael



-- 
Michal Borowiecki
Senior Software Engineer L4
T:  +44 208 742 1600 / +44 203 249 8448
E:  michal.borowie...@openbet.com
W:  www.openbet.com

OpenBet Ltd
Chiswick Park Building 9
566 Chiswick High Rd
London W4 5XT, UK








[jira] [Updated] (KAFKA-5363) Add ability to batch restore and receive restoration stats.

2017-06-02 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-5363:
---
Description: 
Currently, when restoring a state store in a Kafka Streams application, we put 
one key-value at a time into the store.  

This task aims to make this recovery more efficient by creating a new interface 
with "restoreAll" functionality allowing for bulk writes by the underlying 
state store implementation.  

The proposal will also add "beginRestore" and "endRestore" callback methods 
potentially used for:
- tracking when the bulk restoration process begins and ends;
- keeping track of the number of records and last offset restored.



KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-167%3A+Add+a+restoreAll+method+to+StateRestoreCallback

  was:
Add a new method {{restoreAll(List<KeyValue<byte[], byte[]>> records)}} to the 
{{StateRestoreCallback}} to enable bulk writing to the underlying state store 
vs individual {{restore(byte[] key, byte[] value)}}, resulting in quicker 
restore times.

KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-167%3A+Add+a+restoreAll+method+to+StateRestoreCallback


> Add ability to batch restore and receive restoration stats.
> ---
>
> Key: KAFKA-5363
> URL: https://issues.apache.org/jira/browse/KAFKA-5363
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>  Labels: kip
> Fix For: 0.11.1.0
>
>
> Currently, when restoring a state store in a Kafka Streams application, we 
> put one key-value at a time into the store.  
> This task aims to make this recovery more efficient by creating a new 
> interface with "restoreAll" functionality allowing for bulk writes by the 
> underlying state store implementation.  
> The proposal will also add "beginRestore" and "endRestore" callback methods 
> potentially used for:
> - tracking when the bulk restoration process begins and ends;
> - keeping track of the number of records and last offset restored.
> KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-167%3A+Add+a+restoreAll+method+to+StateRestoreCallback
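The extended callback described above might look like the following sketch of the KIP-167 idea (not the final committed API; java.util.Map.Entry stands in for Kafka's KeyValue type, and the enclosing class is hypothetical):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;

public class RestoreAllSketch {
    // Existing single-record callback (shape as in Kafka Streams).
    interface StateRestoreCallback {
        void restore(byte[] key, byte[] value);
    }

    // Proposed extension: bulk writes plus begin/end notifications, so the
    // store can batch and callers can track restoration progress.
    interface BatchingStateRestoreCallback extends StateRestoreCallback {
        void restoreAll(List<Map.Entry<byte[], byte[]>> records);
        default void beginRestore(String storeName) {}
        default void endRestore(String storeName, long restoredCount) {}
    }

    public static void main(String[] args) {
        long[] count = {0};
        BatchingStateRestoreCallback cb = new BatchingStateRestoreCallback() {
            public void restore(byte[] k, byte[] v) { count[0]++; }
            public void restoreAll(List<Map.Entry<byte[], byte[]>> records) {
                count[0] += records.size(); // one bulk write instead of N singles
            }
        };
        cb.restoreAll(List.of(new SimpleEntry<>(new byte[]{1}, new byte[]{2}),
                              new SimpleEntry<>(new byte[]{3}, new byte[]{4})));
        System.out.println(count[0]); // prints 2
    }
}
```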



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5363) Add ability to batch restore and receive restoration stats.

2017-06-02 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-5363:
---
Summary: Add ability to batch restore and receive restoration stats.  (was: 
Add restoreAll functionality to StateRestoreCallback)

> Add ability to batch restore and receive restoration stats.
> ---
>
> Key: KAFKA-5363
> URL: https://issues.apache.org/jira/browse/KAFKA-5363
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>  Labels: kip
> Fix For: 0.11.1.0
>
>
> Add a new method {{restoreAll(List<KeyValue<byte[], byte[]>> records)}} to 
> the {{StateRestoreCallback}} to enable bulk writing to the underlying state 
> store vs individual {{restore(byte[] key, byte[] value)}}, resulting in 
> quicker restore times.
> KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-167%3A+Add+a+restoreAll+method+to+StateRestoreCallback



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Android app produces data in Kafka

2017-06-02 Thread Mireia

> Hi.
> 
> I am going to do a project at the university in order to finish my master's in 
> IoT.
> 
> I need to know if it is possible to connect an Android app with Kafka. I want 
> my Android phone to collect data with its sensors and produce the data to 
> Kafka.
> If that is possible, where can I find documentation about it?
> 
> Thanks. 
> 
> Regards,
> 
> Mireya B. de Miguel Álvarez



[jira] [Commented] (KAFKA-5272) Improve validation for Describe/Alter Configs (KIP-133)

2017-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034818#comment-16034818
 ] 

ASF GitHub Bot commented on KAFKA-5272:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3210

[WIP] KAFKA-5272: Improve validation for Describe/Alter Configs (KIP-133)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-5272-improve-validation-for-describe-alter-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3210.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3210


commit a944c7b5727ebddec940cbed0ca1622b4f0b4016
Author: Ismael Juma 
Date:   2017-06-01T23:21:56Z

KAFKA-5272: Add AlterConfigPolicy




> Improve validation for Describe/Alter Configs (KIP-133)
> ---
>
> Key: KAFKA-5272
> URL: https://issues.apache.org/jira/browse/KAFKA-5272
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> TopicConfigHandler.processConfigChanges() warns about certain topic configs. 
> We should include such validations in alter configs and reject the change if 
> the validation fails. Note that this should be done without changing the 
> behaviour of the ConfigCommand (as it does not have access to the broker 
> configs).
> We should consider adding other validations like KAFKA-4092 and KAFKA-4680.
> Finally, the AlterConfigsPolicy mentioned in KIP-133 will be added at the 
> same time.
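The kind of broker-side check described above can be sketched as a self-contained toy (the class, the rule table, and the exception name are hypothetical illustrations, not the real AlterConfigPolicy API; real Kafka accepts values such as retention.ms=-1, so the rules here are deliberately simplified):

```java
import java.util.HashMap;
import java.util.Map;

public class AlterConfigValidationSketch {

    // Thrown when a proposed config change fails validation, mirroring
    // how a broker would reject the whole AlterConfigs request.
    static class PolicyViolationException extends RuntimeException {
        PolicyViolationException(String msg) { super(msg); }
    }

    // Toy rule table: key -> whether the value must be a positive number.
    static final Map<String, Boolean> KNOWN_NUMERIC = new HashMap<>();
    static {
        KNOWN_NUMERIC.put("retention.ms", true);
        KNOWN_NUMERIC.put("segment.bytes", true);
        KNOWN_NUMERIC.put("cleanup.policy", false);
    }

    static void validate(Map<String, String> proposed) {
        for (Map.Entry<String, String> e : proposed.entrySet()) {
            if (!KNOWN_NUMERIC.containsKey(e.getKey())) {
                throw new PolicyViolationException("Unknown config: " + e.getKey());
            }
            if (KNOWN_NUMERIC.get(e.getKey())) {
                long v;
                try {
                    v = Long.parseLong(e.getValue());
                } catch (NumberFormatException nfe) {
                    throw new PolicyViolationException("Not a number: " + e.getKey());
                }
                if (v <= 0) {
                    throw new PolicyViolationException("Must be positive: " + e.getKey());
                }
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> ok = new HashMap<>();
        ok.put("retention.ms", "86400000");
        validate(ok); // passes silently

        Map<String, String> bad = new HashMap<>();
        bad.put("retention.ms", "not-a-number");
        try {
            validate(bad);
        } catch (PolicyViolationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The point of validating at alter time is to surface the error to the caller immediately, rather than only logging a warning when the change is later applied.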





[GitHub] kafka pull request #3210: [WIP] KAFKA-5272: Improve validation for Describe/...

2017-06-02 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3210

[WIP] KAFKA-5272: Improve validation for Describe/Alter Configs (KIP-133)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-5272-improve-validation-for-describe-alter-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3210.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3210


commit a944c7b5727ebddec940cbed0ca1622b4f0b4016
Author: Ismael Juma 
Date:   2017-06-01T23:21:56Z

KAFKA-5272: Add AlterConfigPolicy




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-5345) Some socket connections not closed after restart of Kafka Streams

2017-06-02 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-5345:
--
Fix Version/s: 0.10.2.2

> Some socket connections not closed after restart of Kafka Streams
> -
>
> Key: KAFKA-5345
> URL: https://issues.apache.org/jira/browse/KAFKA-5345
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
> Environment: MacOs 10.12.5 and Ubuntu 14.04
>Reporter: Jeroen van Wilgenburg
>Assignee: Rajini Sivaram
> Fix For: 0.11.0.0, 0.10.2.2
>
>
> We ran into a problem that resulted in a "Too many open files" exception 
> because some sockets are not closed after a restart.
> This problem only occurs with version {{0.10.2.1}} and {{0.10.2.0}}. 
> {{0.10.1.1}} and {{0.10.1.0}} both work as expected.
> I used the same version for the server and client.
> I used https://github.com/kohsuke/file-leak-detector to display the open file 
> descriptors. The culprit was :
> {noformat}
> #146 socket channel by thread:pool-2-thread-1 on Mon May 29 11:20:47 CEST 2017
>   at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:108)
>   at 
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
>   at java.nio.channels.SocketChannel.open(SocketChannel.java:145)
>   at org.apache.kafka.common.network.Selector.connect(Selector.java:168)
>   at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:629)
>   at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:186)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsKafkaClient.ensureOneNodeIsReady(StreamsKafkaClient.java:195)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsKafkaClient.getAnyReadyBrokerId(StreamsKafkaClient.java:233)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsKafkaClient.checkBrokerCompatibility(StreamsKafkaClient.java:300)
>   at 
> org.apache.kafka.streams.KafkaStreams.checkBrokerVersionCompatibility(KafkaStreams.java:401)
>   at org.apache.kafka.streams.KafkaStreams.start(KafkaStreams.java:425)
> {noformat}
>   
>   
> I could narrow the problem down to a reproducible example below (the only 
> dependency is 
> {{org.apache.kafka:kafka-streams:jar:0.10.2.1}}). 
> *IMPORTANT*: You have to run this code in the IntelliJ IDEA debugger with a 
> special breakpoint to see it fail. 
> See the comments on the socketChannels variable on how to add this 
> breakpoint. 
> When you run this code you will see the number of open SocketChannels 
> increase (only on version 0.10.2.x).
>   
> {code:title=App.java}
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.streams.KafkaStreams;
> import org.apache.kafka.streams.StreamsConfig;
> import org.apache.kafka.streams.kstream.KStream;
> import org.apache.kafka.streams.kstream.KStreamBuilder;
> import java.nio.channels.SocketChannel;
> import java.nio.channels.spi.AbstractInterruptibleChannel;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Properties;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
> import java.util.stream.Collectors;
> public class App {
> private static KafkaStreams streams;
> private static String brokerList;
> // Fill socketChannels with entries on line 'Socket socket = 
> socketChannel.socket();' (line number 170  on 0.10.2.1)
> // of org.apache.kafka.common.network.Selector: Add breakpoint, right 
> click on breakpoint.
> // - Uncheck 'Suspend'
> // - Check 'Evaluate and log' and fill text field with (without quotes) 
> 'App.socketChannels.add(socketChannel)'
> private static final List<SocketChannel> socketChannels = new 
> ArrayList<>();
> public static void main(String[] args) {
> brokerList = args[0];
> init();
> ScheduledExecutorService scheduledThreadPool = 
> Executors.newScheduledThreadPool(1);
> Runnable command = () -> {
> streams.close();
> System.out.println("Open socketChannels: " + 
> socketChannels.stream()
> .filter(AbstractInterruptibleChannel::isOpen)
> .collect(Collectors.toList()).size());
> init();
> };
> scheduledThreadPool.scheduleWithFixedDelay(command, 1L, 2000, 
> TimeUnit.MILLISECONDS);
> }
> private static void init() {
> Properties streamsConfiguration = new Properties();
> streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, 
> "JeroenApp");
> streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
> brokerList);
> StreamsConfig config = 

[GitHub] kafka pull request #3209: KAFKA-5345; Close KafkaClient when streams client ...

2017-06-02 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/3209



