Jenkins build is back to normal : kafka-trunk-jdk7 #1837

2017-01-13 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1836

2017-01-13 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-3853; Extend OffsetFetch API to allow fetching all offsets for a

--
[...truncated 18591 lines...]
org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldThroughOnUnassignedStateStoreAccess STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldThroughOnUnassignedStateStoreAccess PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddMoreThanOnePatternSourceNode STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddMoreThanOnePatternSourceNode PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldSetCorrectSourceNodesWithRegexUpdatedTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldSetCorrectSourceNodesWithRegexUpdatedTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testPatternSourceTopic 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testPatternSourceTopic 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsExternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #1179

2017-01-13 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-4591) Create Topic Policy

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822671#comment-15822671
 ] 

ASF GitHub Bot commented on KAFKA-4591:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2361


> Create Topic Policy
> ---
>
> Key: KAFKA-4591
> URL: https://issues.apache.org/jira/browse/KAFKA-4591
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>  Labels: kip
> Fix For: 0.10.2.0
>
>
> It would be useful to be able to validate create topics requests. More 
> details in the KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-108%3A+Create+Topic+Policy
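The policy described in KIP-108 amounts to server-side validation of create-topic requests. Below is a minimal self-contained sketch of the idea; the request type and the validation rules are local stand-ins for illustration only, not the actual `org.apache.kafka.server.policy.CreateTopicPolicy` interface this change introduces:

```java
public class TopicPolicySketch {
    // Stand-in for the request metadata a create-topic policy would inspect.
    public static class TopicRequest {
        final String topic;
        final int numPartitions;
        final short replicationFactor;
        public TopicRequest(String topic, int numPartitions, short replicationFactor) {
            this.topic = topic;
            this.numPartitions = numPartitions;
            this.replicationFactor = replicationFactor;
        }
    }

    // Example policy: reject under-replicated or over-partitioned topics.
    // Returns null when the request passes, or a rejection reason otherwise
    // (the real API signals rejection by throwing PolicyViolationException).
    public static String validate(TopicRequest req) {
        if (req.replicationFactor < 2) {
            return "replication factor must be at least 2";
        }
        if (req.numPartitions > 100) {
            return "too many partitions";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validate(new TopicRequest("orders", 12, (short) 3)));  // prints null (accepted)
        System.out.println(validate(new TopicRequest("scratch", 12, (short) 1))); // prints a rejection reason
    }
}
```

The broker would invoke such a policy while handling each CreateTopics request, before any topic is actually created.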



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4591) Create Topic Policy

2017-01-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4591.

   Resolution: Fixed
Fix Version/s: 0.10.2.0

Issue resolved by pull request 2361
[https://github.com/apache/kafka/pull/2361]






[GitHub] kafka pull request #2361: KAFKA-4591: Create Topic Policy (KIP-108)

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2361


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3857) Additional log cleaner metrics

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822647#comment-15822647
 ] 

ASF GitHub Bot commented on KAFKA-3857:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2378


> Additional log cleaner metrics
> --
>
> Key: KAFKA-3857
> URL: https://issues.apache.org/jira/browse/KAFKA-3857
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Kiran Pillarisetty
> Fix For: 0.10.2.0
>
>
> The proposal would be to add a couple of additional log cleaner metrics: 
> 1. Time of last log cleaner run 
> 2. Cumulative number of successful log cleaner runs since last broker restart.
> Existing log cleaner metrics (max-buffer-utilization-percent, 
> cleaner-recopy-percent, max-clean-time-secs, max-dirty-percent) do not 
> differentiate an idle log cleaner from a dead log cleaner. It would be useful 
> to have the above two metrics added, to indicate whether the log cleaner is 
> alive (and successfully cleaning) or not.





[GitHub] kafka pull request #2378: KAFKA-3857 Additional log cleaner metrics

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2378




[jira] [Updated] (KAFKA-3857) Additional log cleaner metrics

2017-01-13 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3857:
---
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2378
[https://github.com/apache/kafka/pull/2378]






[GitHub] kafka pull request #2074: KAFKA-3853 (KIP-88): Report offsets for empty grou...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2074




[jira] [Commented] (KAFKA-3853) Report offsets for empty groups in ConsumerGroupCommand

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822596#comment-15822596
 ] 

ASF GitHub Bot commented on KAFKA-3853:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2074


> Report offsets for empty groups in ConsumerGroupCommand
> ---
>
> Key: KAFKA-3853
> URL: https://issues.apache.org/jira/browse/KAFKA-3853
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, tools
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: kip
> Fix For: 0.10.2.0
>
>
> We ought to be able to display offsets for groups which either have no active 
> members or which are not using group management. The owner column can be left 
> empty or set to "N/A". If a group is active, I'm not sure it would make sense 
> to report all offsets, in particular when partitions are unassigned, but if 
> it seems problematic to do so, we could enable the behavior with a flag (e.g. 
> --include-unassigned).





[jira] [Updated] (KAFKA-3853) Report offsets for empty groups in ConsumerGroupCommand

2017-01-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3853:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2074
[https://github.com/apache/kafka/pull/2074]






[jira] [Created] (KAFKA-4633) Always use regex pattern subscription to avoid auto create topics

2017-01-13 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4633:


 Summary: Always use regex pattern subscription to avoid auto 
create topics
 Key: KAFKA-4633
 URL: https://issues.apache.org/jira/browse/KAFKA-4633
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang


In {{KafkaConsumer}}, a metadata update is requested whenever 
{{subscribe(List<String> topics ..)}} is called. And when such a metadata 
request is sent to the broker upon the first {{poll}} call, it will cause the 
broker to auto-create any topics that do not exist if the broker-side config 
{{auto.create.topics.enable}} is turned on.

In order to work around this issue until the config defaults to false and is 
gradually deprecated, we will have Streams always use the other 
{{subscribe}} overload with a regex pattern, which sends the metadata request 
with an empty topic list and hence won't trigger broker-side auto topic creation.

The side effect is that the metadata response will be larger, since it contains 
metadata for all topics; but since we refresh it only infrequently, this adds 
negligible overhead.
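The workaround hinges on the client filtering topics locally instead of naming them in the metadata request. The actual call would be consumer.subscribe(Pattern.compile(...), listener), which needs a running broker; the sketch below only demonstrates the client-side pattern matching, with hypothetical topic names:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class PatternSubscriptionSketch {
    // With subscribe(Pattern), the consumer sends a metadata request with an
    // empty topic list (so the broker has nothing to auto-create) and then
    // filters the full cluster topic list locally, as sketched here.
    public static List<String> matchTopics(Pattern pattern, List<String> clusterTopics) {
        return clusterTopics.stream()
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Pattern p = Pattern.compile("orders-.*"); // hypothetical source-topic pattern
        List<String> cluster = List.of("orders-eu", "orders-us", "payments");
        System.out.println(matchTopics(p, cluster)); // prints [orders-eu, orders-us]
    }
}
```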





[jira] [Updated] (KAFKA-4547) Consumer.position returns incorrect results for Kafka 0.10.1.0 client

2017-01-13 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4547:
---
Status: Patch Available  (was: In Progress)

> Consumer.position returns incorrect results for Kafka 0.10.1.0 client
> -
>
> Key: KAFKA-4547
> URL: https://issues.apache.org/jira/browse/KAFKA-4547
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
> Environment: Windows Kafka 0.10.1.0
>Reporter: Pranav Nakhe
>Assignee: Vahid Hashemian
>  Labels: clients
> Fix For: 0.10.2.0
>
> Attachments: issuerep.zip
>
>
> Consider the following code -
> KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
> List<TopicPartition> listOfPartitions = new ArrayList<>();
> for (int i = 0; i < consumer.partitionsFor("IssueTopic").size(); i++) {
>     listOfPartitions.add(new TopicPartition("IssueTopic", i));
> }
> consumer.assign(listOfPartitions);
> consumer.pause(listOfPartitions);
> consumer.seekToEnd(listOfPartitions);
> // consumer.resume(listOfPartitions); -- commented out
> for (int i = 0; i < listOfPartitions.size(); i++) {
>     System.out.println(consumer.position(listOfPartitions.get(i)));
> }
>   
> I have created a topic IssueTopic with 3 partitions with a single replica on 
> my single-node Kafka installation (0.10.1.0).
> The behavior observed for Kafka client 0.10.1.0 differs from that of Kafka 
> client 0.10.0.1:
> A) Initially when there are no messages on IssueTopic running the above 
> program returns
> 0.10.1.0   
> 0  
> 0  
> 0   
> 0.10.0.1
> 0
> 0
> 0
> B) Next I send 6 messages and see that the messages have been evenly 
> distributed across the three partitions. Running the above program now 
> returns 
> 0.10.1.0   
> 0  
> 0  
> 2  
> 0.10.0.1
> 2
> 2
> 2
> Clearly there is a difference in behavior for the 2 clients.
> Now after seekToEnd call if I make a call to resume (uncomment the resume 
> call in code above) then the behavior is
> 0.10.1.0   
> 2  
> 2  
> 2  
> 0.10.0.1
> 2
> 2
> 2
> This is an issue I came across when using the spark kafka integration for 
> 0.10. When I use kafka 0.10.1.0 I started seeing this issue. I had raised a 
> pull request to resolve that issue [SPARK-18779] but when looking at the 
> kafka client implementation/documentation now it seems the issue is with 
> kafka and not with spark. There does not seem to be any documentation which 
> specifies/implies that we need to call resume after seekToEnd for position to 
> return the correct value. Also there is a clear difference in the behavior in 
> the two kafka client implementations. 





[jira] [Commented] (KAFKA-3857) Additional log cleaner metrics

2017-01-13 Thread Kiran Pillarisetty (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822579#comment-15822579
 ] 

Kiran Pillarisetty commented on KAFKA-3857:
---

I just created a new branch based off of the trunk, applied my changes there 
and created a new PR. 

[~junrao], [~ijuma] Could you please take a look?  Would it be possible to 
include it in 0.10.2.0? (I believe Feature Freeze date is today)
https://github.com/apache/kafka/pull/2378







[jira] [Commented] (KAFKA-3857) Additional log cleaner metrics

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822576#comment-15822576
 ] 

ASF GitHub Bot commented on KAFKA-3857:
---

Github user kiranptivo closed the pull request at:

https://github.com/apache/kafka/pull/1593







[GitHub] kafka pull request #1593: KAFKA-3857 Additional log cleaner metrics

2017-01-13 Thread kiranptivo
Github user kiranptivo closed the pull request at:

https://github.com/apache/kafka/pull/1593




[jira] [Commented] (KAFKA-3857) Additional log cleaner metrics

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822571#comment-15822571
 ] 

ASF GitHub Bot commented on KAFKA-3857:
---

GitHub user kiranptivo opened a pull request:

https://github.com/apache/kafka/pull/2378

KAFKA-3857 Additional log cleaner metrics

Fixes KAFKA-3857

Changes proposed in this pull request:

An additional log cleaner metric has been added:
time-since-last-run-ms: Time since the last log cleaner run, in 
milliseconds. This metric is reset to 0 every time the log cleaner thread 
runs. If this metric keeps increasing, it indicates that the log 
cleaner thread is not alive.

If you are creating alerts around the log cleaner, you could monitor this 
metric. A high "time-since-last-run-ms" value (e.g. 600000) indicates that the 
log cleaner hasn't run in the last 10 minutes.
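The alerting idea above is a threshold check on a JMX gauge. The sketch below registers a stand-in gauge on the platform MBeanServer and evaluates the rule; the ObjectName kafka.log:type=LogCleaner,name=time-since-last-run-ms and the 10-minute threshold are assumptions for illustration, not taken from this thread:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class LogCleanerAlertSketch {
    // Stand-in gauge MBean; a real broker would expose the metric itself.
    public interface CleanerGaugeMBean { long getValue(); }
    public static class CleanerGauge implements CleanerGaugeMBean {
        private final long value;
        public CleanerGauge(long value) { this.value = value; }
        public long getValue() { return value; }
    }

    static final long TEN_MINUTES_MS = 10 * 60 * 1000L; // assumed alert threshold

    // Threshold rule: alert when the cleaner hasn't run for too long.
    public static boolean shouldAlert(long sinceLastRunMs) {
        return sinceLastRunMs > TEN_MINUTES_MS;
    }

    // Read the gauge through JMX, then apply the rule.
    public static boolean shouldAlert(MBeanServer server, ObjectName name) throws Exception {
        return shouldAlert((Long) server.getAttribute(name, "Value"));
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Assumed name; check the broker's actual LogCleaner MBeans in jconsole.
        ObjectName name = new ObjectName("kafka.log:type=LogCleaner,name=time-since-last-run-ms");
        server.registerMBean(new CleanerGauge(900_000L), name); // 15 minutes: too long
        System.out.println(shouldAlert(server, name)); // prints true
    }
}
```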

The code has been tested. JMX metric has been verified.

Note: This pull request is a continuation of the following pull request.  
PR#1593 was quite old and I had some trouble rebasing it. Decided to start a 
fresh PR.


https://github.com/apache/kafka/pull/1593/files/927b28cf41275874945beb7377f7f36c462f27c8#diff-ca1c127eee4b3c748ae73028f6abeab8

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kiranptivo/kafka log_cleaner_jmx_metric

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2378.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2378


commit a8635ff4a13e66b3f142ad97fff0ab082ecaf466
Author: Kiran Pillarisetty 
Date:   2017-01-14T00:23:45Z

Added a new metric, time-since-last-run-ms, to track the time since the last 
log cleaner run, in milliseconds









[GitHub] kafka pull request #2378: KAFKA-3857 Additional log cleaner metrics

2017-01-13 Thread kiranptivo
GitHub user kiranptivo opened a pull request:

https://github.com/apache/kafka/pull/2378

KAFKA-3857 Additional log cleaner metrics

Fixes KAFKA-3857

Changes proposed in this pull request:

An additional log cleaner metric has been added:
time-since-last-run-ms: Time since the last log cleaner run, in 
milliseconds. This metric is reset to 0 every time the log cleaner thread 
runs. If this metric keeps increasing, it indicates that the log 
cleaner thread is not alive.

If you are creating alerts around the log cleaner, you could monitor this 
metric. A high "time-since-last-run-ms" value (e.g. 600000) indicates that the 
log cleaner hasn't run in the last 10 minutes.

The code has been tested. JMX metric has been verified.

Note: This pull request is a continuation of the following pull request.  
PR#1593 was quite old and I had some trouble rebasing it. Decided to start a 
fresh PR.


https://github.com/apache/kafka/pull/1593/files/927b28cf41275874945beb7377f7f36c462f27c8#diff-ca1c127eee4b3c748ae73028f6abeab8

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kiranptivo/kafka log_cleaner_jmx_metric

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2378.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2378


commit a8635ff4a13e66b3f142ad97fff0ab082ecaf466
Author: Kiran Pillarisetty 
Date:   2017-01-14T00:23:45Z

Added a new metric, time-since-last-run-ms, to track the time since the last 
log cleaner run, in milliseconds






[jira] [Updated] (KAFKA-4490) Add Global Table support to Kafka Streams

2017-01-13 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4490:
-
Resolution: Fixed
  Reviewer: Guozhang Wang
Status: Resolved  (was: Patch Available)

> Add Global Table support to Kafka Streams
> -
>
> Key: KAFKA-4490
> URL: https://issues.apache.org/jira/browse/KAFKA-4490
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> As per KIP-99 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-99%3A+Add+Global+Tables+to+Kafka+Streams
> Add support for Global Tables





[jira] [Created] (KAFKA-4632) Kafka Connect WorkerSinkTask.closePartitions doesn't handle WakeupException

2017-01-13 Thread Scott Reynolds (JIRA)
Scott Reynolds created KAFKA-4632:
-

 Summary: Kafka Connect WorkerSinkTask.closePartitions doesn't 
handle WakeupException
 Key: KAFKA-4632
 URL: https://issues.apache.org/jira/browse/KAFKA-4632
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.1, 0.10.0.0, 0.10.1.0
Reporter: Scott Reynolds


WorkerSinkTask's closePartitions method doesn't handle the WakeupException that 
can be thrown from commitSync.

{code}
org.apache.kafka.common.errors.WakeupException
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:404)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:245)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:499)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.doCommitSync(WorkerSinkTask.java:245)
at org.apache.kafka.connect.runtime.WorkerSinkTask.doCommit(WorkerSinkTask.java:264)
at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:305)
at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:435)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:147)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

I believe it should catch and ignore it, as that is what the poll method does 
when isStopping is true:

{code:java}
} catch (WakeupException we) {
log.trace("{} consumer woken up", id);

if (isStopping())
return;

if (shouldPause()) {
pauseAll();
} else if (!pausedForRedelivery) {
resumeAll();
}
}
{code}

But I'm unsure; I'd love some insight into this.
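The suggested catch-and-ignore behavior can be sketched in isolation. The exception and commit callback below are local stand-ins (the real classes are org.apache.kafka.common.errors.WakeupException and KafkaConsumer.commitSync, which need a running Connect worker):

```java
public class ClosePartitionsSketch {
    // Local stand-in for org.apache.kafka.common.errors.WakeupException.
    public static class WakeupException extends RuntimeException {}

    // Stand-in for the final offset commit done while closing partitions.
    public interface Committer { void commitSync(); }

    // Suggested behavior: a wakeup raised during the shutdown commit is
    // swallowed, mirroring what WorkerSinkTask.poll already does when
    // isStopping is true. Returns whether the commit actually completed.
    public static boolean closePartitions(Committer committer, boolean isStopping) {
        try {
            committer.commitSync();
            return true; // commit completed normally
        } catch (WakeupException we) {
            if (isStopping) {
                return false; // ignore the wakeup: we are shutting down anyway
            }
            throw we; // not stopping: let the caller handle it
        }
    }

    public static void main(String[] args) {
        Committer wakes = () -> { throw new WakeupException(); };
        System.out.println(closePartitions(wakes, true)); // prints false, no exception escapes
    }
}
```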





[jira] [Commented] (KAFKA-4060) Remove ZkClient dependency in Kafka Streams

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822503#comment-15822503
 ] 

ASF GitHub Bot commented on KAFKA-4060:
---

GitHub user hjafarpour opened a pull request:

https://github.com/apache/kafka/pull/2377

Kafka 4060 docs update

Updated the docs with changes in KAFKA-4060.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hjafarpour/kafka KAFKA-4060-docs-update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2377.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2377


commit db0405c6994123417441ac328d2f9e7f96b8e851
Author: Hojjat Jafarpour 
Date:   2016-09-20T00:10:54Z

Removed Zookeeper dependency from Kafka Streams. Added two tests for 
creating and deleting topics. They work in the IDE but fail during the build. 
Removing the new tests for now.

commit 69b8baf766a0ed70de8782938cf6157fa6b01794
Author: Hojjat Jafarpour 
Date:   2016-09-20T17:52:53Z

Made changes according to the review feedback.

commit f8952a287a7fec0d3a51db36c003a3d599491309
Author: Hojjat Jafarpour 
Date:   2016-09-22T16:19:07Z

Merge branch 'trunk' into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

commit cb6891fe839cf6ca7ba46ca42437344764cf009e
Author: Hojjat Jafarpour 
Date:   2016-09-22T17:13:01Z

Made more changes based on review feedback.

commit 393b909158aadada58a796402e8b6f03c0077ef9
Author: Hojjat Jafarpour 
Date:   2016-09-23T17:34:46Z

Applied review feedback.

commit c19f0df4632eb2b2b9529aea0be3ed9173002b63
Author: Hojjat Jafarpour 
Date:   2016-09-28T19:35:17Z

Merge branch 'trunk' into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

commit 96264d47643b04d53293b86f3ffd718fbb688609
Author: Hojjat Jafarpour 
Date:   2016-09-28T22:15:06Z

Made changes based on Guozhang's comments.

commit 4f7bfd6cc4147803d90168793f498f8a1b81f18b
Author: Hojjat Jafarpour 
Date:   2016-09-29T22:08:39Z

Merge branch 'trunk' into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

commit 5e386f0fe3c109dfea95b392569adbe8f32018a0
Author: Hojjat Jafarpour 
Date:   2016-09-30T00:22:15Z

Disabled auto topic generation for  EmbeddedKafkaCluster. Removed polling 
in StreamPartitionAssignor.

commit eb3ec2947ad9799ed35048faafd4bf35106f880a
Author: Hojjat Jafarpour 
Date:   2016-09-30T18:49:42Z

Merge branch 'trunk' into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

commit f44a9853cbd38f9c10a0b15bc2297d8a242fce3f
Author: Hojjat Jafarpour 
Date:   2016-09-30T20:22:04Z

Now deleting and re creating existing topic with different number of 
partitions.

commit 3d9dba21f0c39104f5e807a50f3fdb5956eda1bf
Author: Hojjat Jafarpour 
Date:   2016-09-30T22:09:51Z

Fixed some issues in tests.

commit 5b0be7aa1a2b1d26ae453aab78fb4989c9145411
Author: Hojjat Jafarpour 
Date:   2016-10-07T18:41:42Z

Sending topic management request in batch.

commit 8e064f1863aa8095cfae3f11a6f85513c498e479
Author: Hojjat Jafarpour 
Date:   2016-10-07T21:36:12Z

Added the missing class.

commit 74ce5b9f9e4b78dee346e77ac52bc053b9cc36f6
Author: Hojjat Jafarpour 
Date:   2016-10-07T21:45:43Z

Merge branch 'KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new' 
of https://github.com/hjafarpour/kafka into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

commit c986b539ad26a2c124da60984d1cac4c2aab97dc
Author: Hojjat Jafarpour 
Date:   2016-10-07T21:59:21Z

Merge branch 'trunk' into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

commit 37079dd28a15a6d36632631227c9d718ca451745
Author: Hojjat Jafarpour 
Date:   2016-10-10T03:09:33Z

Fixed the issue with tests.

commit 21b6907d457591ba0a242e548549d315b38e69f2
Author: Hojjat Jafarpour 
Date:   2016-10-24T16:46:49Z

Made minor changes based on Guozhang's feedback.

commit 3ae6583576b25f8fce4a9f4fac3f4a33bd8f47b6
Author: Hojjat Jafarpour 
Date:   2016-10-24T16:49:23Z

Merge branch 'trunk' into 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-new

commit 

[GitHub] kafka pull request #2377: Kafka 4060 docs update

2017-01-13 Thread hjafarpour
GitHub user hjafarpour opened a pull request:

https://github.com/apache/kafka/pull/2377

Kafka 4060 docs update

Updated the docs with changes in KAFKA-4060.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hjafarpour/kafka KAFKA-4060-docs-update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2377.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2377


commit db0405c6994123417441ac328d2f9e7f96b8e851
Author: Hojjat Jafarpour 
Date:   2016-09-20T00:10:54Z

Removed Zookeeper dependency from Kafka Streams. Added two tests for 
creating and deleting topics. They work in the IDE but fail during the build. 
Removing the new tests for now.

commit 69b8baf766a0ed70de8782938cf6157fa6b01794
Author: Hojjat Jafarpour 
Date:   2016-09-20T17:52:53Z

Made changes according to the review feedback.

commit 253784edc252d80d5ecb3eda8ef506657f659ba1
Author: Hojjat Jafarpour 
Date:   2016-10-24T20:35:48Z

Minor change to ignore unnecessary error handling.

commit 3599534a55347aad55957ced02e0901082416e5f
Author: Hojjat 

Build failed in Jenkins: kafka-trunk-jdk7 #1835

2017-01-13 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4565; Separation of Internal and External traffic (KIP-103)

[me] MINOR: Cluster file generator should produce valid json

[ismael] MINOR: Remove unneeded client used API lists

[jason] KAFKA-4627; Fix timing issue in consumer close tests

[wangguoz] HOTFIX: Added another broker to smoke test

[wangguoz] KAFKA-3739: Add no-arg constructor for WindowedSerdes in Streams

[wangguoz] MINOR: Remove unnecessary store info from TopologyBuilder

[me] KAFKA-4381: Add per partition lag metrics to the consumer

--
[...truncated 18848 lines...]
org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion STARTED

org.apache.kafka.streams.processor.internals.assignment.AssignmentInfoTest > 
shouldDecodePreviousVersion PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCleanUpTaskStateDirectoriesThatAreNotCurrentlyLocked STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCleanUpTaskStateDirectoriesThatAreNotCurrentlyLocked PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCreateBaseDirectory STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCreateBaseDirectory PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldLockMulitpleTaskDirectories STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldLockMulitpleTaskDirectories PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCreateDirectoriesIfParentDoesntExist STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCreateDirectoriesIfParentDoesntExist PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldUnlockGlobalStateDirectory STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldUnlockGlobalStateDirectory PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldListAllTaskDirectories STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldListAllTaskDirectories PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldLockTaskStateDirectory STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldLockTaskStateDirectory PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldLockGlobalStateDirectory STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldLockGlobalStateDirectory PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCreateTaskStateDirectory STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldCreateTaskStateDirectory PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldNotRemoveNonTaskDirectoriesAndFiles STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldNotRemoveNonTaskDirectoriesAndFiles PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldBeTrueIfAlreadyHoldsLock STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldBeTrueIfAlreadyHoldsLock PASSED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldReleaseTaskStateDirectoryLock STARTED

org.apache.kafka.streams.processor.internals.StateDirectoryTest > 
shouldReleaseTaskStateDirectoryLock PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowStreamsExceptionIfNoPartitionsFoundForStore STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 

[jira] [Commented] (KAFKA-4467) Run tests on travis-ci using docker

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822487#comment-15822487
 ] 

ASF GitHub Bot commented on KAFKA-4467:
---

GitHub user raghavgautam opened a pull request:

https://github.com/apache/kafka/pull/2376

KAFKA-4467: Run tests on travis-ci using docker

@ijuma @ewencp @cmccabe @harshach Please review.
Here is a sample run:
https://travis-ci.org/raghavgautam/kafka/builds/191714520

In this run 214 tests were run and 144 tests passed.

I will open separate JIRAs for fixing failures.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/raghavgautam/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2376.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2376


commit ca4bce69cf52c4a4cc06f16c25bf6f932bd5fb93
Author: Raghav Kumar Gautam 
Date:   2017-01-10T01:32:36Z

KAFKA-4467: Run tests on travis-ci using docker




> Run tests on travis-ci using docker
> ---
>
> Key: KAFKA-4467
> URL: https://issues.apache.org/jira/browse/KAFKA-4467
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Raghav Kumar Gautam
>Assignee: Raghav Kumar Gautam
> Fix For: 0.10.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2376: KAFKA-4467: Run tests on travis-ci using docker

2017-01-13 Thread raghavgautam




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-4467) Run tests on travis-ci using docker

2017-01-13 Thread Raghav Kumar Gautam (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raghav Kumar Gautam reassigned KAFKA-4467:
--

Assignee: Raghav Kumar Gautam







Build failed in Jenkins: kafka-trunk-jdk8 #1178

2017-01-13 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Cluster file generator should produce valid json

[ismael] MINOR: Remove unneeded client used API lists

[jason] KAFKA-4627; Fix timing issue in consumer close tests

[wangguoz] HOTFIX: Added another broker to smoke test

[wangguoz] KAFKA-3739: Add no-arg constructor for WindowedSerdes in Streams

[wangguoz] MINOR: Remove unnecessary store info from TopologyBuilder

[me] KAFKA-4381: Add per partition lag metrics to the consumer

--
[...truncated 35394 lines...]

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:668)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:349)
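The failed assertion above comes from a generic polling helper. A minimal stdlib-only sketch of such a wait-for-condition utility (the class name, poll interval, and error handling here are illustrative assumptions, not the actual Kafka `TestUtils` implementation):

```java
import java.util.function.BooleanSupplier;

// Illustrative poll-until-true helper in the spirit of TestUtils.waitForCondition.
// Names, timeout handling, and the 10 ms poll interval are assumptions, not Kafka's code.
public class WaitUtil {
    public static void waitForCondition(BooleanSupplier condition,
                                        long timeoutMs,
                                        String message) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        // Re-check the condition until it holds or the deadline passes.
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(
                        "Condition not met within timeout " + timeoutMs + ". " + message);
            }
            Thread.sleep(10);
        }
    }
}
```

A test that never satisfies its condition, as `queryOnRebalance` did here, surfaces as the `AssertionError` carrying the caller's message.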

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED


[jira] [Updated] (KAFKA-4631) Refresh consumer metadata more frequently for unknown subscribed topics

2017-01-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4631:
---
Description: By default, the consumer refreshes metadata every 5 minutes. 
In testing, it can often happen that a topic is created at about the same time 
that the consumer is started. In the worst case, creation finishes after the 
consumer fetches metadata, and the test must wait 5 minutes for the consumer to 
refresh metadata in order to discover the topic. To address this problem, users 
can decrease the metadata refresh interval, but this means more frequent 
refreshes even after all topics are known. An improvement would be to 
internally let the consumer fetch metadata more frequently when the consumer 
encounters unknown topics. Perhaps every 5-10 seconds would be reasonable, for 
example.  (was: By default, the consumer refreshes metadata every 5 minutes. In 
testing, it can often happen that a topic is created at about the same time 
that the consumer is started. In the worst case, it finishes after the consumer 
fetches metadata, and the test must wait 5 minutes for the consumer to refresh 
metadata. To address this problem, users can decrease the metadata refresh 
interval, but this means more frequent refreshes even after all topics are 
known. An improvement would be to internally let the consumer fetch metadata 
more frequently when the consumer encounters unknown topics. Perhaps every 5-10 
seconds would be reasonable, for example.)

> Refresh consumer metadata more frequently for unknown subscribed topics
> ---
>
> Key: KAFKA-4631
> URL: https://issues.apache.org/jira/browse/KAFKA-4631
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> By default, the consumer refreshes metadata every 5 minutes. In testing, it 
> can often happen that a topic is created at about the same time that the 
> consumer is started. In the worst case, creation finishes after the consumer 
> fetches metadata, and the test must wait 5 minutes for the consumer to 
> refresh metadata in order to discover the topic. To address this problem, 
> users can decrease the metadata refresh interval, but this means more 
> frequent refreshes even after all topics are known. An improvement would be 
> to internally let the consumer fetch metadata more frequently when the 
> consumer encounters unknown topics. Perhaps every 5-10 seconds would be 
> reasonable, for example.
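The workaround described above, decreasing the metadata refresh interval, is a one-line consumer setting (`metadata.max.age.ms`, which defaults to 300000 ms). A sketch using plain `java.util.Properties`; the bootstrap server and group id are placeholder values:

```java
import java.util.Properties;

// Consumer properties lowering the metadata refresh interval from the
// 5-minute default (300000 ms) to 10 seconds, per the workaround above.
// Bootstrap servers and group id are placeholders.
public class ConsumerMetadataConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "example-group");           // placeholder
        // Lower refresh interval so newly created topics are discovered
        // sooner, at the cost of more frequent metadata requests.
        props.setProperty("metadata.max.age.ms", "10000");
        return props;
    }
}
```

The proposed improvement would make this tuning unnecessary for the unknown-topic case, since the consumer would refresh more aggressively only while topics are missing.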





[jira] [Created] (KAFKA-4631) Refresh consumer metadata more frequently for unknown subscribed topics

2017-01-13 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4631:
--

 Summary: Refresh consumer metadata more frequently for unknown 
subscribed topics
 Key: KAFKA-4631
 URL: https://issues.apache.org/jira/browse/KAFKA-4631
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Matthias J. Sax
 Fix For: 0.10.2.0


By default, the consumer refreshes metadata every 5 minutes. In testing, it can 
often happen that a topic is created at about the same time that the consumer 
is started. In the worst case, it finishes after the consumer fetches metadata, 
and the test must wait 5 minutes for the consumer to refresh metadata. To 
address this problem, users can decrease the metadata refresh interval, but 
this means more frequent refreshes even after all topics are known. An 
improvement would be to internally let the consumer fetch metadata more 
frequently when the consumer encounters unknown topics. Perhaps every 5-10 
seconds would be reasonable, for example.





[jira] [Commented] (KAFKA-4060) Remove ZkClient dependency in Kafka Streams

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822448#comment-15822448
 ] 

ASF GitHub Bot commented on KAFKA-4060:
---

GitHub user hjafarpour opened a pull request:

https://github.com/apache/kafka/pull/2375

Kafka 4060 remove zk client dependency in kafka streams followup

Updated the KAFKA-4060 code based on the new Admin client API. 
Also we won't delete internal topics anymore.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hjafarpour/kafka 
KAFKA-4060-Remove-ZkClient-dependency-in-Kafka-Streams-followup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2375.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2375



[GitHub] kafka pull request #2375: Kafka 4060 remove zk client dependency in kafka st...

2017-01-13 Thread hjafarpour

[GitHub] kafka pull request #2374: KAFKA-3209: KIP-66: more single message transforms

2017-01-13 Thread shikhar
GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/2374

KAFKA-3209: KIP-66: more single message transforms

WIP, in this PR I'd also like to add doc generation for transformations.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka more-smt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2374.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2374


commit f34cc71c9931ea7ec5dd045512c623196928a2a3
Author: Shikhar Bhushan 
Date:   2017-01-13T20:00:31Z

SetSchemaMetadata SMT

commit 09af66b3f1861bf2a0718aaf79ca1a3bd22adcec
Author: Shikhar Bhushan 
Date:   2017-01-13T21:44:57Z

Support schemaless and rename HoistToStruct->Hoist; add the inverse Extract 
transform

commit 022f4920c5f09d068bbf49e47091a1333dc48be2
Author: Shikhar Bhushan 
Date:   2017-01-13T21:51:43Z

InsertField transform -- fix bad name for interface containing config name 
constants

commit c5260a718e2f0ade66c4607a4b9c21abda61b90c
Author: Shikhar Bhushan 
Date:   2017-01-13T22:01:25Z

ValueToKey SMT
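For illustration, transforms like the ones in these commits are configured on a connector as an ordered chain. The snippet below is a hypothetical example; the transformation class names follow this WIP PR and may differ in the merged version.

```properties
# Illustrative connector config chaining two single message transforms:
# wrap the raw value in a struct, then derive the record key from a field.
# Class names follow this WIP PR and are not guaranteed to match the release.
transforms=makeStruct,keyFromField
transforms.makeStruct.type=org.apache.kafka.connect.transforms.Hoist$Value
transforms.makeStruct.field=line
transforms.keyFromField.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.keyFromField.fields=line
```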




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3209) Support single message transforms in Kafka Connect

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822420#comment-15822420
 ] 

ASF GitHub Bot commented on KAFKA-3209:
---

GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/2374


> Support single message transforms in Kafka Connect
> --
>
> Key: KAFKA-3209
> URL: https://issues.apache.org/jira/browse/KAFKA-3209
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Neha Narkhede
>Assignee: Shikhar Bhushan
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> Users should be able to perform light transformations on messages between a 
> connector and Kafka. This is needed because some transformations must be 
> performed before the data hits Kafka (e.g. filtering certain types of events 
> or PII filtering). It's also useful for very light, single-message 
> modifications that are easier to perform inline with the data import/export.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1834

2017-01-13 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4626; Add KafkaConsumer#close change to upgrade notes

--
[...truncated 15725 lines...]
offsetRetention + partitionData.timestamp
^
:576:
 method offsetData in class ListOffsetRequest is deprecated: see corresponding 
Javadoc for more information.
val (authorizedRequestInfo, unauthorizedRequestInfo) = 
offsetRequest.offsetData.asScala.partition {
 ^
:576:
 class PartitionData in object ListOffsetRequest is deprecated: see 
corresponding Javadoc for more information.
val (authorizedRequestInfo, unauthorizedRequestInfo) = 
offsetRequest.offsetData.asScala.partition {

  ^
:581:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  new 
ListOffsetResponse.PartitionData(Errors.UNKNOWN_TOPIC_OR_PARTITION.code, 
List[JLong]().asJava)
  ^
:606:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
(topicPartition, new ListOffsetResponse.PartitionData(Errors.NONE.code, 
offsets.map(new JLong(_)).asJava))
 ^
:613:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e).code, 
List[JLong]().asJava))
   ^
:616:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e).code, 
List[JLong]().asJava))
   ^
:267:
 class PartitionData in object ListOffsetRequest is deprecated: see 
corresponding Javadoc for more information.
val partitions = Map(topicPartition -> new 
ListOffsetRequest.PartitionData(earliestOrLatest, 1))
 ^
:280:
 value offsets in class PartitionData is deprecated: see corresponding Javadoc 
for more information.
  partitionData.offsets.get(0)
^
:45:
 class OldProducer in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.KafkaProducer instead.
new OldProducer(getOldProducerProps(config))
^
:47:
 class NewShinyProducer in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.KafkaProducer instead.
new NewShinyProducer(getNewProducerProps(config))
^
21 warnings found
:core:processResources UP-TO-DATE
:core:classes
:core:copyDependantLibs
:core:jar
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:checkstyleMain
:examples:compileTestJava UP-TO-DATE
:examples:processTestResources UP-TO-DATE
:examples:testClasses UP-TO-DATE
:examples:checkstyleTest UP-TO-DATE
:examples:test UP-TO-DATE
:log4j-appender:compileJava
:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes
:log4j-appender:checkstyleMain
:log4j-appender:compileTestJava
:log4j-appender:processTestResources UP-TO-DATE
:log4j-appender:testClasses
:log4j-appender:checkstyleTest
:log4j-appender:test

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends STARTED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends PASSED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs 
STARTED


[jira] [Commented] (KAFKA-4568) Simplify multiple mechanism support in SaslTestHarness

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822360#comment-15822360
 ] 

ASF GitHub Bot commented on KAFKA-4568:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2373

KAFKA-4568: Simplify test code for multiple SASL mechanisms

Remove the workaround for testing multiple SASL mechanisms, using 
sasl.jaas.config and the new support for multiple client modules within a JVM.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4568

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2373.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2373


commit 4f6214b09a5058e594eb79f61ab5e3d606caec10
Author: Rajini Sivaram 
Date:   2017-01-13T19:01:53Z

KAFKA-4568: Simplify test code for multiple SASL mechanisms




> Simplify multiple mechanism support in SaslTestHarness
> --
>
> Key: KAFKA-4568
> URL: https://issues.apache.org/jira/browse/KAFKA-4568
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Minor
>
> The dynamic JAAS configuration property in KAFKA-4259 can be used to 
> implement multiple client mechanisms in a JVM without the hacks in 
> SaslTestHarness.scala.
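For context, sasl.jaas.config (from KAFKA-4259) lets each client supply its JAAS login module inline rather than via a shared static JAAS file, which is what makes per-mechanism test setup straightforward. A representative client configuration, shown here only as an illustration:

```properties
# Illustrative client properties: the login module is supplied inline,
# so each client in the JVM can use a different SASL mechanism.
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client" \
  password="client-secret";
```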





[GitHub] kafka pull request #2373: KAFKA-4568: Simplify test code for multiple SASL m...

2017-01-13 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2373




[jira] [Created] (KAFKA-4630) Implement RecordTooLargeException when communicating with pre-KIP-74 brokers

2017-01-13 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-4630:
--

 Summary: Implement RecordTooLargeException when communicating with 
pre-KIP-74 brokers
 Key: KAFKA-4630
 URL: https://issues.apache.org/jira/browse/KAFKA-4630
 Project: Kafka
  Issue Type: Sub-task
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Implement RecordTooLargeException when communicating with pre-KIP-74 brokers





Build failed in Jenkins: kafka-trunk-jdk8 #1177

2017-01-13 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4626; Add KafkaConsumer#close change to upgrade notes

[jason] KAFKA-4565; Separation of Internal and External traffic (KIP-103)

--
[...truncated 18719 lines...]

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:668)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:349)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns 

[jira] [Commented] (KAFKA-4614) Long GC pause harming broker performance which is caused by mmap objects created for OffsetIndex

2017-01-13 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822254#comment-15822254
 ] 

Guozhang Wang commented on KAFKA-4614:
--

Thanks for this report and thorough analysis [~kawamuray]! 

> Long GC pause harming broker performance which is caused by mmap objects 
> created for OffsetIndex
> 
>
> Key: KAFKA-4614
> URL: https://issues.apache.org/jira/browse/KAFKA-4614
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: Yuto Kawamura
>Assignee: Yuto Kawamura
>  Labels: latency, performance
> Fix For: 0.10.2.0
>
>
> First let me clarify our system environment information as I think it's 
> important to understand this issue:
> OS: CentOS6
> Kernel version: 2.6.32-XX
> Filesystem used for data volume: XFS
> Java version: 1.8.0_66
> GC option: Kafka default(G1GC)
> Kafka version: 0.10.0.1
> h2. Phenomenon
> In our Kafka cluster, the usual response time for a Produce request is about 
> 1ms at the 50th percentile to 10ms at the 99th percentile. All topics are 
> configured to have 3 replicas and all producers are configured with 
> {{acks=all}}, so this time includes replication latency.
> Sometimes we observe the 99th percentile latency increase to 100ms ~ 500ms, 
> but in most cases the time-consuming part is "Remote", which means it is 
> caused by slow replication, which is known to happen for various reasons 
> (also an issue that we're trying to improve, but out of scope for this 
> issue).
> However, we found that there is a different pattern which happens rarely but 
> consistently, 3 ~ 5 times a day on each server. The interesting part is 
> that "RequestQueue" increased as well as "Total" and "Remote".
> At the same time, we observed that the disk read metrics (in terms of both 
> read bytes and read time) spike at exactly the same moment. Currently we 
> have only caught-up consumers, so this metric normally sticks to zero while 
> all requests are served from the page cache.
> In order to investigate what Kafka is "read"ing, I employed SystemTap and 
> wrote the following script. It traces all disk reads (only actual reads by 
> the physical device) made for the data volume by the broker process.
> {code}
> global target_pid = KAFKA_PID
> global target_dev = DATA_VOLUME
> probe ioblock.request {
>   if (rw == BIO_READ && pid() == target_pid && devname == target_dev) {
>  t_ms = gettimeofday_ms() + 9 * 3600 * 1000 // timezone adjustment
>  printf("%s,%03d:  tid = %d, device = %s, inode = %d, size = %d\n", 
> ctime(t_ms / 1000), t_ms % 1000, tid(), devname, ino, size)
>  print_backtrace()
>  print_ubacktrace()
>   }
> }
> {code}
> As a result, we could observe many logs like the one below:
> {code}
> Thu Dec 22 17:21:39 2016,209:  tid = 126123, device = sdb1, inode = -1, size 
> = 4096
>  0x81275050 : generic_make_request+0x0/0x5a0 [kernel]
>  0x81275660 : submit_bio+0x70/0x120 [kernel]
>  0xa036bcaa : _xfs_buf_ioapply+0x16a/0x200 [xfs]
>  0xa036d95f : xfs_buf_iorequest+0x4f/0xe0 [xfs]
>  0xa036db46 : _xfs_buf_read+0x36/0x60 [xfs]
>  0xa036dc1b : xfs_buf_read+0xab/0x100 [xfs]
>  0xa0363477 : xfs_trans_read_buf+0x1f7/0x410 [xfs]
>  0xa033014e : xfs_btree_read_buf_block+0x5e/0xd0 [xfs]
>  0xa0330854 : xfs_btree_lookup_get_block+0x84/0xf0 [xfs]
>  0xa0330edf : xfs_btree_lookup+0xbf/0x470 [xfs]
>  0xa032456f : xfs_bmbt_lookup_eq+0x1f/0x30 [xfs]
>  0xa032628b : xfs_bmap_del_extent+0x12b/0xac0 [xfs]
>  0xa0326f34 : xfs_bunmapi+0x314/0x850 [xfs]
>  0xa034ad79 : xfs_itruncate_extents+0xe9/0x280 [xfs]
>  0xa0366de5 : xfs_inactive+0x2f5/0x450 [xfs]
>  0xa0374620 : xfs_fs_clear_inode+0xa0/0xd0 [xfs]
>  0x811affbc : clear_inode+0xac/0x140 [kernel]
>  0x811b0776 : generic_delete_inode+0x196/0x1d0 [kernel]
>  0x811b0815 : generic_drop_inode+0x65/0x80 [kernel]
>  0x811af662 : iput+0x62/0x70 [kernel]
>  0x37ff2e5347 : munmap+0x7/0x30 [/lib64/libc-2.12.so]
>  0x7ff169ba5d47 : Java_sun_nio_ch_FileChannelImpl_unmap0+0x17/0x50 
> [/usr/jdk1.8.0_66/jre/lib/amd64/libnio.so]
>  0x7ff269a1307e
> {code}
> We took a jstack dump of the broker process and found that tid = 126123 
> corresponds to the thread which is for GC(hex(tid) == nid(Native Thread ID)):
> {code}
> $ grep 0x1ecab /tmp/jstack.dump
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7ff278d0c800 
> nid=0x1ecab in Object.wait() [0x7ff17da11000]
> {code}
> In order to confirm, we enabled {{PrintGCApplicationStoppedTime}} switch and 
> confirmed that in total the time the broker paused is longer than usual, when 
> a broker 
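For readers unfamiliar with the mechanism in this report: the broker maps its index files with FileChannel.map, and the OS-level munmap runs only when the MappedByteBuffer is eventually garbage collected, which is how munmap ends up on a GC thread's stack above. A minimal sketch, with illustrative file names and sizes:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Maps an OffsetIndex-style file the same way the broker does. The
// mapping outlives the channel, and the munmap for the returned buffer
// happens only when the MappedByteBuffer object is collected -- which
// is why munmap can show up inside a GC pause.
public class MmapIndexSketch {
    public static MappedByteBuffer mapIndex(Path file, int sizeBytes) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // Map the index read-write; writes go straight to the page cache.
            return ch.map(FileChannel.MapMode.READ_WRITE, 0, sizeBytes);
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("offset-index", ".index");
        MappedByteBuffer buf = mapIndex(f, 4096);
        buf.putLong(0, 42L);
        System.out.println(buf.getLong(0)); // prints 42
        buf = null; // the munmap is now deferred to a future GC cycle
        Files.deleteIfExists(f);
    }
}
```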

[jira] [Commented] (KAFKA-4381) Add per partition lag metric to KafkaConsumer.

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822239#comment-15822239
 ] 

ASF GitHub Bot commented on KAFKA-4381:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2329


> Add per partition lag metric to KafkaConsumer.
> --
>
> Key: KAFKA-4381
> URL: https://issues.apache.org/jira/browse/KAFKA-4381
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Affects Versions: 0.10.1.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>  Labels: kip
> Fix For: 0.10.2.0
>
>
> Currently KafkaConsumer only has a metric of max lag across all the 
> partitions. It would be useful to know per partition lag as well.
> I remember there was a ticket created before but did not find it. So I am 
> creating this ticket.
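As a rough sketch of what such a metric reports: per-partition lag is the log end offset minus the consumer's current position. The real patch records this through the client's metrics registry; the helper class below is hypothetical and only shows the arithmetic.

```java
import java.util.HashMap;
import java.util.Map;

// Per-partition lag = log end offset - consumer position, floored at 0.
// Hypothetical helper; the actual metric lives inside the consumer client.
public class PartitionLagSketch {
    public static Map<Integer, Long> lagByPartition(Map<Integer, Long> endOffsets,
                                                    Map<Integer, Long> positions) {
        Map<Integer, Long> lag = new HashMap<>();
        for (Map.Entry<Integer, Long> e : endOffsets.entrySet()) {
            long position = positions.getOrDefault(e.getKey(), 0L);
            lag.put(e.getKey(), Math.max(0L, e.getValue() - position));
        }
        return lag;
    }

    public static void main(String[] args) {
        Map<Integer, Long> end = Map.of(0, 100L, 1, 50L);
        Map<Integer, Long> pos = Map.of(0, 90L, 1, 50L);
        System.out.println(lagByPartition(end, pos)); // partition 0 lags by 10
    }
}
```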





[jira] [Resolved] (KAFKA-4381) Add per partition lag metric to KafkaConsumer.

2017-01-13 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4381.
--
Resolution: Fixed

Issue resolved by pull request 2329
[https://github.com/apache/kafka/pull/2329]






[GitHub] kafka pull request #2329: KAFKA-4381: Add per partition lag metrics to the c...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2329




[GitHub] kafka pull request #2338: MINOR: remove unnecessary store info from Topology...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2338




Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-01-13 Thread Dong Lin
Hey Alexey,

Thanks for your review and the alternative approach. Here is my
understanding of your patch. Kafka's background threads are used to move
data between log directories. When data movement is triggered, the log will
be rolled, the new segments will be put in the new directory, and the
background threads will move segments from the old directory to the new
directory.

It is important to note that KIP-112 is intended to work with KIP-113 to
support JBOD. I think your solution is definitely simpler and better under
the current Kafka implementation, where a broker will fail if any disk
fails. But I am not sure we want to allow a broker to run with a partial
disk failure. Let's say a replica is being moved from log_dir_old to
log_dir_new and then log_dir_old stops working due to disk failure. How
would your existing patch handle it? To make the scenario a bit more
complicated, let's say the broker is shut down, log_dir_old's disk fails,
and the broker starts. In this case the broker doesn't even know whether
log_dir_new has all the data needed for this replica. It becomes a problem
if the broker is elected leader of this partition in this case.

The solution presented in the KIP attempts to handle this by replacing the
replica in an atomic fashion once the log in the new directory has fully
caught up with the log in the old directory. At any time the log can be
considered to exist in only one log directory.

And to answer your question, yes, topicPartition.log refers to
topic-partition/segment.log.

Thanks,
Dong




On Fri, Jan 13, 2017 at 4:12 AM, Alexey Ozeritsky 
wrote:

> Hi,
>
> We have a similar solution that has been working in production since 2014.
> You can see it here:
> https://github.com/resetius/kafka/commit/20658593e246d2184906879defa2e763c4d413fb
> The idea is very simple:
> 1. The disk balancer runs in a separate thread inside the scheduler pool.
> 2. It does not touch empty partitions.
> 3. Before it moves a partition, it forcibly creates a new segment on the
> destination disk.
> 4. It moves segments one by one, from the newest to the oldest.
> 5. The Log class works with segments on both disks.
>
> Your approach seems too complicated; moreover, it means that you have to
> patch different components of the system.
> Could you clarify what you mean by topicPartition.log? Is it
> topic-partition/segment.log?
>
> 12.01.2017, 21:47, "Dong Lin" :
> > Hi all,
> >
> > We created KIP-113: Support replicas movement between log directories.
> > Please find the KIP wiki at:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories
> >
> > This KIP is related to KIP-112 (Handle disk failure for JBOD):
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
> > They are needed in order to support JBOD in Kafka. Please help review
> > the KIP. Your feedback is appreciated!
> >
> > Thanks,
> > Dong
>
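The catch-up-then-swap protocol described in this thread can be reduced to a toy model: copy segments into the new log directory until it has fully caught up, then atomically swap which directory is considered active, so the replica exists in exactly one directory at any time. This is an illustrative sketch only; all class and field names (ReplicaMoveSketch, activeDir) are hypothetical and not part of the actual KIP-113 patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Toy model of the replica move: segments are plain strings, directories
// are lists, and the atomic swap is an AtomicReference CAS.
public class ReplicaMoveSketch {
    final List<String> oldDir = new ArrayList<>();
    final List<String> newDir = new ArrayList<>();
    final AtomicReference<List<String>> activeDir = new AtomicReference<>();

    ReplicaMoveSketch(List<String> segments) {
        oldDir.addAll(segments);
        activeDir.set(oldDir); // the replica starts in the old directory
    }

    // Would run in a background thread in a real broker.
    void move() {
        for (String segment : oldDir) {
            newDir.add(segment); // copy one segment at a time
        }
        // Swap only after full catch-up; the replica is never "in both".
        activeDir.compareAndSet(oldDir, newDir);
    }

    List<String> active() { return activeDir.get(); }

    boolean moved() { return activeDir.get() == newDir; }

    public static void main(String[] args) {
        ReplicaMoveSketch m = new ReplicaMoveSketch(
                List.of("00000000000000000000.log", "00000000000000001024.log"));
        m.move();
        System.out.println(m.moved()); // prints true
    }
}
```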


[jira] [Updated] (KAFKA-4614) Long GC pause harming broker performance which is caused by mmap objects created for OffsetIndex

2017-01-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4614:
---
Fix Version/s: 0.10.2.0


[GitHub] kafka pull request #2308: kafka-3739:Add no-arg constructor for library prov...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2308




[jira] [Resolved] (KAFKA-3739) Add no-arg constructor for library provided serdes

2017-01-13 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3739.
--
   Resolution: Fixed
Fix Version/s: 0.10.2.0

Issue resolved by pull request 2308
[https://github.com/apache/kafka/pull/2308]

> Add no-arg constructor for library provided serdes
> --
>
> Key: KAFKA-3739
> URL: https://issues.apache.org/jira/browse/KAFKA-3739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: huxi
>  Labels: newbie, user-experience
> Fix For: 0.10.2.0
>
>
> We need to add a no-arg constructor explicitly for library-provided 
> serdes such as {{WindowedSerde}} that already have constructors with 
> arguments. Otherwise they cannot be used through configs, which expect 
> to construct them via reflection using a no-arg constructor.
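The constraint can be illustrated with a small, self-contained sketch (class names below are hypothetical stand-ins, not Kafka's actual serdes): config-driven code resolves a class by name and invokes its no-arg constructor via reflection, which fails unless that constructor exists.

```java
// Hedged sketch (illustrative names, not Kafka's actual classes): why
// config-driven instantiation requires a no-arg constructor.
public class SerdeReflectionSketch {

    // Stand-in for a library serde like WindowedSerde that originally only
    // had a constructor taking arguments.
    public static class WindowedLikeSerde {
        private final String inner;

        // The explicit no-arg constructor the issue asks for: without it,
        // the reflective instantiation below throws NoSuchMethodException.
        public WindowedLikeSerde() {
            this("default-inner");
        }

        public WindowedLikeSerde(String inner) {
            this.inner = inner;
        }

        public String innerName() {
            return inner;
        }
    }

    public static void main(String[] args) throws Exception {
        // This mirrors what config handling does: resolve the class by name
        // (in real code the name comes from a config string) and invoke the
        // no-arg constructor via reflection.
        Class<?> clazz = Class.forName(WindowedLikeSerde.class.getName());
        WindowedLikeSerde serde =
            (WindowedLikeSerde) clazz.getDeclaredConstructor().newInstance();
        System.out.println(serde.innerName()); // prints "default-inner"
    }
}
```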



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2362: HOTFIX: Added another broker to smoke test

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2362




[jira] [Resolved] (KAFKA-4627) Intermittent test failure in consumer close test

2017-01-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4627.

Resolution: Fixed

Issue resolved by pull request 2367
[https://github.com/apache/kafka/pull/2367]

> Intermittent test failure in consumer close test
> 
>
> Key: KAFKA-4627
> URL: https://issues.apache.org/jira/browse/KAFKA-4627
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Jenkins PR build failure: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/830/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testLeaveGroupTimeout/
> {quote}
> java.lang.IllegalArgumentException: No requests available to node 
> localhost:1969 (id: 2147483647 rack: null)
>   at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:209)
>   at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:195)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.consumerCloseTest(KafkaConsumerTest.java:1222)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testLeaveGroupTimeout(KafkaConsumerTest.java:1148)
> {quote}





[jira] [Commented] (KAFKA-4627) Intermittent test failure in consumer close test

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822189#comment-15822189
 ] 

ASF GitHub Bot commented on KAFKA-4627:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2367


> Intermittent test failure in consumer close test
> 
>
> Key: KAFKA-4627
> URL: https://issues.apache.org/jira/browse/KAFKA-4627
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Jenkins PR build failure: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/830/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testLeaveGroupTimeout/
> {quote}
> java.lang.IllegalArgumentException: No requests available to node 
> localhost:1969 (id: 2147483647 rack: null)
>   at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:209)
>   at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:195)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.consumerCloseTest(KafkaConsumerTest.java:1222)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testLeaveGroupTimeout(KafkaConsumerTest.java:1148)
> {quote}





[GitHub] kafka pull request #2367: KAFKA-4627: Fix timing issue in consumer close tes...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2367




[GitHub] kafka pull request #2368: MINOR: Remove unused CONSUMER_APIS and PRODUCER_AP...

2017-01-13 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/2368




[GitHub] kafka pull request #2372: MINOR: Remove unneeded client used API lists

2017-01-13 Thread hachikuji
Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/2372




[GitHub] kafka pull request #2372: MINOR: Remove unneeded client used API lists

2017-01-13 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2372

MINOR: Remove unneeded client used API lists



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka minor-cleanup-used-apis

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2372.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2372


commit 02d839669a1666b8f8ab5b3fc5fbaf497e857955
Author: Jason Gustafson 
Date:   2017-01-13T18:46:14Z

MINOR: Remove unneeded client used API lists






[jira] [Updated] (KAFKA-4125) Provide low-level Processor API meta data in DSL layer

2017-01-13 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4125:
---
Assignee: Bill Bejeck  (was: Guozhang Wang)

> Provide low-level Processor API meta data in DSL layer
> --
>
> Key: KAFKA-4125
> URL: https://issues.apache.org/jira/browse/KAFKA-4125
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Minor
>
> For the Processor API, the user can get meta data like record offset, timestamp, etc. 
> via the provided {{Context}} object. It might be useful to allow users to 
> access this information in the DSL layer, too.
> The idea would be to do it "the Flink way", i.e., by providing
> RichFunctions; take {{mapValues()}} for example.
> It takes a {{ValueMapper}} that only has the method
> {noformat}
> V2 apply(V1 value);
> {noformat}
> Thus, you cannot get any meta data within apply (it's completely "blind").
> We would add two more interfaces: {{RichFunction}} with a method
> {{open(Context context)}} and
> {noformat}
> RichValueMapper extends ValueMapper, RichFunction
> {noformat}
> This way, the user can choose to implement a Rich- or Standard-function and
> we do not need to change existing APIs. Both can be handed into
> {{KStream.mapValues()}} for example. Internally, we check if a Rich
> function is provided, and if yes, hand in the {{Context}} object once, to
> make it available to the user, who can now access it within {{apply()}} -- of
> course, the user must set a member variable in {{open()}} to hold the
> reference to the Context object.
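The proposal above can be sketched in plain Java. Everything here is illustrative: the {{Context}} class is a simplified hypothetical stand-in for the Processor API context, and the generics on {{RichValueMapper}} are assumed.

```java
// Sketch of the proposed Rich-function interfaces (hypothetical, per the
// issue description; not an agreed API).
public class RichFunctionSketch {

    // Simplified stand-in for the Processor API context.
    public static class Context {
        private final long offset;
        public Context(long offset) { this.offset = offset; }
        public long offset() { return offset; }
    }

    public interface ValueMapper<V1, V2> {
        V2 apply(V1 value);
    }

    // Proposed addition: one-time access to the Context.
    public interface RichFunction {
        void open(Context context);
    }

    public interface RichValueMapper<V1, V2>
            extends ValueMapper<V1, V2>, RichFunction {
    }

    public static void main(String[] args) {
        RichValueMapper<String, String> mapper = new RichValueMapper<String, String>() {
            private Context context; // saved in open(), as the description suggests

            @Override
            public void open(Context context) {
                this.context = context;
            }

            @Override
            public String apply(String value) {
                // Record meta data is now reachable inside apply().
                return value + "@" + context.offset();
            }
        };

        // Internally, the framework would check `instanceof RichFunction`
        // and call open() once before processing begins.
        if (mapper instanceof RichFunction) {
            mapper.open(new Context(42L));
        }
        System.out.println(mapper.apply("record")); // prints "record@42"
    }
}
```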





[GitHub] kafka pull request #2370: Cluster file generator should produce valid json

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2370




[jira] [Comment Edited] (KAFKA-4125) Provide low-level Processor API meta data in DSL layer

2017-01-13 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822095#comment-15822095
 ] 

Matthias J. Sax edited comment on KAFKA-4125 at 1/13/17 6:21 PM:
-

Hey [~bbejeck]. I thought about this some more, and we should have some API 
discussion before you get started -- I don't want you to waste your time.

The point is that we already have {{transform}} and {{transformValues}} in the API, 
and we are considering adding {{flatTransform}} and {{flatTransformValues}}, 
similar to {{map}}, {{flatMap}} and {{mapValues}}, {{flatMapValues}}. Those 
functions allow access to all record meta data -- thus adding {{RichFunctions}} 
might be redundant. Even though "transform" was added to support stateful 
operators, you can use them in a stateless fashion, too.

This also relates to the idea of adding a {{key}} parameter to {{mapValues}} -- not 
sure if we need/want to add this.

I think it would be helpful to start a general API design discussion to see 
what we actually want to add and what we don't. WDYT? \cc [~guozhang] Right now, 
all these ideas are spread out over multiple JIRAs, and I think we should 
consolidate them to get a sound API change instead of "fixing" random stuff 
here and there.


was (Author: mjsax):
Hey [~bbejeck]. I thus thought about this, and we should have some API 
discussion before you get started -- don't want you to waste your time.

The point is, that we do have `transform`, and `transformValues` in the API 
already, and we consider adding `flatTransform` and `flatTransformValues` 
similar to `map`, `flatMap` and `mapValues`, `flatMapValues`. Those functions 
allow to access all record meta data -- thus adding `RichFunctions` might be 
redundant. Even if "transform" was added to support stateful operators, you can 
use the in a stateless fashion, too.

This relates also to the idea to add parameter `key` to `mapValues` -- not sure 
if we need/want to add this.

I think it would be helpful to start a general API design discussion to see 
what we actually want to add and what not. WDYT? \cc [~guozhang] Right now, all 
those ideas are spread out over multiple JIRAs and I think we should 
consolidate all those ideas to get a sound API change instead of "fixing" 
random stuff here and there.

> Provide low-level Processor API meta data in DSL layer
> --
>
> Key: KAFKA-4125
> URL: https://issues.apache.org/jira/browse/KAFKA-4125
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Minor
>
> For the Processor API, the user can get meta data like record offset, timestamp, etc. 
> via the provided {{Context}} object. It might be useful to allow users to 
> access this information in the DSL layer, too.
> The idea would be to do it "the Flink way", i.e., by providing
> RichFunctions; take {{mapValues()}} for example.
> It takes a {{ValueMapper}} that only has the method
> {noformat}
> V2 apply(V1 value);
> {noformat}
> Thus, you cannot get any meta data within apply (it's completely "blind").
> We would add two more interfaces: {{RichFunction}} with a method
> {{open(Context context)}} and
> {noformat}
> RichValueMapper extends ValueMapper, RichFunction
> {noformat}
> This way, the user can choose to implement a Rich- or Standard-function and
> we do not need to change existing APIs. Both can be handed into
> {{KStream.mapValues()}} for example. Internally, we check if a Rich
> function is provided, and if yes, hand in the {{Context}} object once, to
> make it available to the user, who can now access it within {{apply()}} -- of
> course, the user must set a member variable in {{open()}} to hold the
> reference to the Context object.





[jira] [Commented] (KAFKA-4125) Provide low-level Processor API meta data in DSL layer

2017-01-13 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822095#comment-15822095
 ] 

Matthias J. Sax commented on KAFKA-4125:


Hey [~bbejeck]. I thought about this some more, and we should have some API 
discussion before you get started -- I don't want you to waste your time.

The point is that we already have `transform` and `transformValues` in the API, 
and we are considering adding `flatTransform` and `flatTransformValues`, 
similar to `map`, `flatMap` and `mapValues`, `flatMapValues`. Those functions 
allow access to all record meta data -- thus adding `RichFunctions` might be 
redundant. Even though "transform" was added to support stateful operators, you 
can use them in a stateless fashion, too.

This also relates to the idea of adding a `key` parameter to `mapValues` -- not 
sure if we need/want to add this.

I think it would be helpful to start a general API design discussion to see 
what we actually want to add and what we don't. WDYT? \cc [~guozhang] Right now, 
all these ideas are spread out over multiple JIRAs, and I think we should 
consolidate them to get a sound API change instead of "fixing" random stuff 
here and there.

> Provide low-level Processor API meta data in DSL layer
> --
>
> Key: KAFKA-4125
> URL: https://issues.apache.org/jira/browse/KAFKA-4125
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Minor
>
> For the Processor API, the user can get meta data like record offset, timestamp, etc. 
> via the provided {{Context}} object. It might be useful to allow users to 
> access this information in the DSL layer, too.
> The idea would be to do it "the Flink way", i.e., by providing
> RichFunctions; take {{mapValues()}} for example.
> It takes a {{ValueMapper}} that only has the method
> {noformat}
> V2 apply(V1 value);
> {noformat}
> Thus, you cannot get any meta data within apply (it's completely "blind").
> We would add two more interfaces: {{RichFunction}} with a method
> {{open(Context context)}} and
> {noformat}
> RichValueMapper extends ValueMapper, RichFunction
> {noformat}
> This way, the user can choose to implement a Rich- or Standard-function and
> we do not need to change existing APIs. Both can be handed into
> {{KStream.mapValues()}} for example. Internally, we check if a Rich
> function is provided, and if yes, hand in the {{Context}} object once, to
> make it available to the user, who can now access it within {{apply()}} -- of
> course, the user must set a member variable in {{open()}} to hold the
> reference to the Context object.





[jira] [Commented] (KAFKA-4588) QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable is occasionally failing on jenkins

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822084#comment-15822084
 ] 

ASF GitHub Bot commented on KAFKA-4588:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2371

KAFKA-4588: Wait for topics to be created in 
QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable

After debugging this, I can see that in the runs where it fails there is a race 
between when the topic is actually created/ready on the broker and when the 
assignment happens. When it fails, `StreamPartitionAssignor.assign(..)` gets 
called with a `Cluster` that has no topics. Hence the test hangs, as no tasks get 
assigned. To fix this, I added a `waitForTopics` method to 
`EmbeddedKafkaCluster`, which waits until the topics have been created.
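The fix can be sketched as a simple polling helper. The method below is a hypothetical stand-in for the `waitForTopics` added to `EmbeddedKafkaCluster`, driven by a fake topic listing for demonstration:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

// Hedged sketch of a waitForTopics-style helper: poll until all expected
// topics are visible, or fail after a timeout. Names are illustrative.
public class WaitForTopicsSketch {

    public static void waitForTopics(Supplier<Set<String>> listTopics,
                                     Set<String> expected,
                                     long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (listTopics.get().containsAll(expected)) {
                return; // all topics have been created
            }
            Thread.sleep(50); // back off before re-checking
        }
        throw new AssertionError(
            "Topics not created within " + timeoutMs + " ms: " + expected);
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated broker view: the second topic "shows up" on the third poll,
        // mimicking the race between topic creation and assignment.
        final int[] polls = {0};
        Supplier<Set<String>> listTopics = () -> {
            polls[0]++;
            return polls[0] >= 3
                ? new HashSet<>(Arrays.asList("input", "output"))
                : new HashSet<>(Arrays.asList("input"));
        };
        waitForTopics(listTopics,
            new HashSet<>(Arrays.asList("input", "output")), 5_000);
        System.out.println("topics ready after " + polls[0] + " polls");
    }
}
```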

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka integration-test-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2371.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2371


commit 52f5792a41878a5decf26ad011178737096e0933
Author: Damian Guy 
Date:   2017-01-13T16:51:49Z

metadata hack

commit c85facef85cfcfd25c7cae3560f4a098f0a97b92
Author: Damian Guy 
Date:   2017-01-13T18:07:48Z

wait for topics to be created




> QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable
>  is occasionally failing on jenkins
> ---
>
> Key: KAFKA-4588
> URL: https://issues.apache.org/jira/browse/KAFKA-4588
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for store count-by-key
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:502)





[GitHub] kafka pull request #2371: KAFKA-4588: Wait for topics to be created in Query...

2017-01-13 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2371

KAFKA-4588: Wait for topics to be created in 
QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable

After debugging this, I can see that in the runs where it fails there is a race 
between when the topic is actually created/ready on the broker and when the 
assignment happens. When it fails, `StreamPartitionAssignor.assign(..)` gets 
called with a `Cluster` that has no topics. Hence the test hangs, as no tasks get 
assigned. To fix this, I added a `waitForTopics` method to 
`EmbeddedKafkaCluster`, which waits until the topics have been created.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka integration-test-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2371.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2371


commit 52f5792a41878a5decf26ad011178737096e0933
Author: Damian Guy 
Date:   2017-01-13T16:51:49Z

metadata hack

commit c85facef85cfcfd25c7cae3560f4a098f0a97b92
Author: Damian Guy 
Date:   2017-01-13T18:07:48Z

wait for topics to be created






[jira] [Updated] (KAFKA-4565) Separation of Internal and External traffic (KIP-103)

2017-01-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4565:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2354
[https://github.com/apache/kafka/pull/2354]

> Separation of Internal and External traffic (KIP-103)
> -
>
> Key: KAFKA-4565
> URL: https://issues.apache.org/jira/browse/KAFKA-4565
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>  Labels: kip
> Fix For: 0.10.2.0
>
>
> During the 0.9.0.0 release cycle, support for multiple listeners per broker 
> was introduced (KAFKA-1809). Each listener is associated with a security 
> protocol, ip/host and port. When combined with the advertised listeners 
> mechanism, there is a fair amount of flexibility with one limitation: at most 
> one listener per security protocol in each of the two configs (listeners and 
> advertised.listeners).
> In some environments, one may want to differentiate between external clients, 
> internal clients and replication traffic independently of the security 
> protocol for cost, performance and security reasons. See the KIP for more 
> details: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
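As an illustration of the flexibility KIP-103 adds, a broker could declare separately named listeners for internal, external, and replication traffic. The listener names, hosts, and protocol choices below are examples only, in the spirit of the KIP's design:

```properties
# Illustrative broker config: three named listeners, each mapped to its own
# security protocol (names and hosts are examples, not recommendations).
listeners=INTERNAL://:9092,EXTERNAL://:9093,REPLICATION://:9094
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093,REPLICATION://broker1.internal:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL,REPLICATION:PLAINTEXT
inter.broker.listener.name=REPLICATION
```

This lifts the pre-KIP limitation of at most one listener per security protocol, since two listeners (here INTERNAL and REPLICATION) can share PLAINTEXT.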





[jira] [Commented] (KAFKA-4565) Separation of Internal and External traffic (KIP-103)

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822078#comment-15822078
 ] 

ASF GitHub Bot commented on KAFKA-4565:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2354


> Separation of Internal and External traffic (KIP-103)
> -
>
> Key: KAFKA-4565
> URL: https://issues.apache.org/jira/browse/KAFKA-4565
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>  Labels: kip
> Fix For: 0.10.2.0
>
>
> During the 0.9.0.0 release cycle, support for multiple listeners per broker 
> was introduced (KAFKA-1809). Each listener is associated with a security 
> protocol, ip/host and port. When combined with the advertised listeners 
> mechanism, there is a fair amount of flexibility with one limitation: at most 
> one listener per security protocol in each of the two configs (listeners and 
> advertised.listeners).
> In some environments, one may want to differentiate between external clients, 
> internal clients and replication traffic independently of the security 
> protocol for cost, performance and security reasons. See the KIP for more 
> details: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic





[GitHub] kafka pull request #2354: KAFKA-4565: Separation of Internal and External tr...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2354




[jira] [Comment Edited] (KAFKA-3745) Consider adding join key to ValueJoiner interface

2017-01-13 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812244#comment-15812244
 ] 

Matthias J. Sax edited comment on KAFKA-3745 at 1/13/17 6:06 PM:
-

[~sree2k] thanks for picking this up. Just wanted to point out, as this JIRA 
contains a public API change, you will need to write a KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

It should not be a big deal to write the KIP -- if you need any help, just let 
me know (matth...@confluent.io) and I can guide you through the process.


was (Author: mjsax):
[~sree2k] thanks for picking this up. Just wanted to point out, as this JIRA 
contains a public API change, you will need to write a KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

It should not be a big dead to write the KIP -- if you need any help, just let 
me know (matth...@confluent.io) and I can guide you through the process.

> Consider adding join key to ValueJoiner interface
> -
>
> Key: KAFKA-3745
> URL: https://issues.apache.org/jira/browse/KAFKA-3745
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Sreepathi Prasanna
>Priority: Minor
>  Labels: api, needs-kip, newbie
>
> In working with Kafka Stream joining, it's sometimes the case that a join key 
> is not actually present in the values of the joins themselves (if, for 
> example, a previous transform generated an ephemeral join key.) In such 
> cases, the actual key of the join is not available in the ValueJoiner 
> implementation to be used to construct the final joined value. This can be 
> worked around by explicitly threading the join key into the value if needed, 
> but it seems like extending the interface to pass the join key along as well 
> would be helpful.
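One possible shape of the extended interface is sketched below. This is hypothetical, not an agreed API (as noted, a KIP would be required); the names are illustrative.

```java
// Hedged sketch: a join interface that also receives the join key, so the
// joined value can use the key without threading it through both inputs.
public class KeyedValueJoinerSketch {

    // Current shape: only the two values are visible to the join.
    public interface ValueJoiner<V1, V2, VR> {
        VR apply(V1 value1, V2 value2);
    }

    // Hypothetical extended variant that also passes the join key through.
    public interface ValueJoinerWithKey<K, V1, V2, VR> {
        VR apply(K key, V1 value1, V2 value2);
    }

    public static void main(String[] args) {
        // With the key available, the joined value can embed it directly,
        // even when an ephemeral join key is absent from both values.
        ValueJoinerWithKey<String, Integer, Integer, String> joiner =
            (key, left, right) -> key + ":" + (left + right);
        System.out.println(joiner.apply("user-1", 2, 3)); // prints "user-1:5"
    }
}
```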





[GitHub] kafka pull request #2370: Cluster file generator should produce valid json

2017-01-13 Thread 0x0ece
GitHub user 0x0ece opened a pull request:

https://github.com/apache/kafka/pull/2370

Cluster file generator should produce valid json

This way, if ${KAFKA_NUM_CONTAINERS} is changed in docker/run_tests.sh, 
the generated JSON is still valid.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/0x0ece/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2370.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2370


commit 3f02efe25814fb44f59240a231d3a4c7b81ba048
Author: Emanuele Cesena 
Date:   2017-01-13T17:42:38Z

Cluster file generator should produce valid json

This way, if the ${KAFKA_NUM_CONTAINERS} is changed in docker/run_tests.sh, 
the json is still valid






[jira] [Commented] (KAFKA-4626) Add consumer close change to upgrade notes

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822037#comment-15822037
 ] 

ASF GitHub Bot commented on KAFKA-4626:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2366


> Add consumer close change to upgrade notes
> --
>
> Key: KAFKA-4626
> URL: https://issues.apache.org/jira/browse/KAFKA-4626
> Project: Kafka
>  Issue Type: Task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> KafkaConsumer#close() takes longer to close because of the graceful close 
> changes from KIP-102 (KAFKA-4426). Add this to upgrade notes for 0.10.2.0





[jira] [Resolved] (KAFKA-4626) Add consumer close change to upgrade notes

2017-01-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4626.

Resolution: Fixed

Issue resolved by pull request 2366
[https://github.com/apache/kafka/pull/2366]

> Add consumer close change to upgrade notes
> --
>
> Key: KAFKA-4626
> URL: https://issues.apache.org/jira/browse/KAFKA-4626
> Project: Kafka
>  Issue Type: Task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> KafkaConsumer#close() takes longer to close because of the graceful close 
> changes from KIP-102 (KAFKA-4426). Add this to upgrade notes for 0.10.2.0





[GitHub] kafka pull request #2366: KAFKA-4626: Add KafkaConsumer#close change to upgr...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2366




[jira] [Commented] (KAFKA-4622) KafkaConsumer does not properly handle authorization errors from offset fetches

2017-01-13 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822022#comment-15822022
 ] 

Jason Gustafson commented on KAFKA-4622:


[~ewencp] Should be an easy fix, so I was planning to get it into 0.10.2.0. I'm 
just awaiting KIP-88 since the patch would likely conflict.

> KafkaConsumer does not properly handle authorization errors from offset 
> fetches
> ---
>
> Key: KAFKA-4622
> URL: https://issues.apache.org/jira/browse/KAFKA-4622
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.1, 0.10.1.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.2.0
>
>
> It's possible to receive both group and topic authorization exceptions from 
> an OffsetFetch, but the consumer currently treats this as generic unexpected 
> errors. We should probably return {{GroupAuthorizationException}} and 
> {{TopicAuthorizationException}} to be consistent with the other consumer 
> APIs. 
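A sketch of the proposed handling (illustrative only, not the actual consumer code): dispatch the authorization error codes from an OffsetFetch response to the specific exceptions instead of a generic unexpected-error path. The enum and exception classes here are simplified stand-ins for Kafka's.

```java
// Hedged sketch: map OffsetFetch authorization errors to specific exceptions.
public class OffsetFetchErrorSketch {

    // Stand-in for the protocol error codes relevant to this issue.
    public enum Errors {
        NONE, GROUP_AUTHORIZATION_FAILED, TOPIC_AUTHORIZATION_FAILED, UNKNOWN
    }

    public static class GroupAuthorizationException extends RuntimeException { }
    public static class TopicAuthorizationException extends RuntimeException { }

    public static void checkOffsetFetchError(Errors error) {
        switch (error) {
            case NONE:
                return; // success
            case GROUP_AUTHORIZATION_FAILED:
                throw new GroupAuthorizationException();
            case TOPIC_AUTHORIZATION_FAILED:
                throw new TopicAuthorizationException();
            default:
                // Anything else remains a generic unexpected error.
                throw new IllegalStateException("Unexpected error: " + error);
        }
    }

    public static void main(String[] args) {
        try {
            checkOffsetFetchError(Errors.TOPIC_AUTHORIZATION_FAILED);
        } catch (TopicAuthorizationException e) {
            System.out.println("topic authorization error surfaced");
        }
    }
}
```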





[GitHub] kafka pull request #2369: KAFKA-4589: SASL/SCRAM documentation

2017-01-13 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2369

KAFKA-4589: SASL/SCRAM documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4589

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2369.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2369


commit a05b72829ae4d4ac7af00141cf8e15944845b6f3
Author: Rajini Sivaram 
Date:   2017-01-05T14:00:18Z

KAFKA-4363: Documentation for sasl.jaas.config property

commit ef87a90f88cf213256e667bfb5125a23e67a1249
Author: Rajini Sivaram 
Date:   2017-01-13T16:11:56Z

KAFKA-4589: SASL/SCRAM documentation






[jira] [Commented] (KAFKA-3537) Provide access to low-level Metrics in ProcessorContext

2017-01-13 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822002#comment-15822002
 ] 

Guozhang Wang commented on KAFKA-3537:
--

[~mdcoon1] In the coming 0.10.2.0 release users can get the metrics registry 
from {{context().metrics().metrics()}}, just FYI.

> Provide access to low-level Metrics in ProcessorContext
> ---
>
> Key: KAFKA-3537
> URL: https://issues.apache.org/jira/browse/KAFKA-3537
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.9.0.1
>Reporter: Michael Coon
>Assignee: Eno Thereska
>Priority: Minor
>  Labels: semantics
> Fix For: 0.10.2.0
>
>
> It would be good to have access to the underlying Metrics component in 
> StreamMetrics. StreamMetrics forces a naming convention for metrics that does 
> not fit our use case for reporting. We need to be able to convert the stream 
> metrics to our own metrics formatting and it's cumbersome to extract group/op 
> names from pre-formatted strings the way they are set up in StreamMetricsImpl. 
> If there were a "metrics()" method of StreamMetrics to give me the underlying 
> Metrics object, I could register my own sensors/metrics as needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4629) Per topic MBeans leak

2017-01-13 Thread Alberto Forti (JIRA)
Alberto Forti created KAFKA-4629:


 Summary: Per topic MBeans leak
 Key: KAFKA-4629
 URL: https://issues.apache.org/jira/browse/KAFKA-4629
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.1
Reporter: Alberto Forti
Priority: Minor


Hi,

In our application we create and delete topics dynamically. Most of the time, 
when a topic is deleted, the related MBeans are not deleted. Example of MBean: 
kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec,topic=dw_06b5f828-e452-4e22-89c9-67849a65603d

Also, deleting a topic often produces (what I think is) noise in the logs at 
WARN level. One example is:
WARN  PartitionStateMachine$DeleteTopicsListener:83 - [DeleteTopicsListener on 
1]: Ignoring request to delete non-existing topics 
dw_fe8ff14b-aa9b-4f24-9bc1-6fbce15d20d2

Easily reproducible with a basic Kafka cluster with two brokers: just create and 
delete topics a few times. Sometimes the MBeans for the topic are deleted and 
sometimes they are not.

I'm creating and deleting topics using the AdminUtils class in the Java API:
AdminUtils.deleteTopic(zkUtils, topicName);
AdminUtils.createTopic(zkUtils, topicName, partitions, replicationFactor, 
topicConfiguration, kafka.admin.RackAwareMode.Enforced$.MODULE$);
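A quick way to check whether per-topic MBeans survived a delete is to query the 
MBean server directly. The sketch below uses only the JDK's javax.management API; 
the topic name is a placeholder, and on a plain JVM with no broker registered the 
query simply returns an empty set — run it inside the broker JVM (or over a remote 
JMX connection) to check a real cluster.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TopicMBeanCheck {

    // Returns the per-topic BrokerTopicMetrics MBeans still registered for
    // the given topic; after a successful topic delete this should be empty.
    public static Set<ObjectName> leakedTopicMBeans(String topic) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Property-list pattern: match any name= attribute for this topic.
        ObjectName pattern = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,topic=" + topic + ",*");
        return server.queryNames(pattern, null);
    }

    public static void main(String[] args) throws Exception {
        // Placeholder topic name for the sketch.
        System.out.println(leakedTopicMBeans("dw_example").size());
    }
}
```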

Kafka version: 0.10.0.1 (haven't tried other versions)

Thanks,
Alberto






[jira] [Resolved] (KAFKA-3537) Provide access to low-level Metrics in ProcessorContext

2017-01-13 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3537.
--
Resolution: Fixed

Resolved as part of PR 1446.

> Provide access to low-level Metrics in ProcessorContext
> ---
>
> Key: KAFKA-3537
> URL: https://issues.apache.org/jira/browse/KAFKA-3537
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.9.0.1
>Reporter: Michael Coon
>Assignee: Eno Thereska
>Priority: Minor
>  Labels: semantics
> Fix For: 0.10.2.0
>
>
> It would be good to have access to the underlying Metrics component in 
> StreamMetrics. StreamMetrics forces a naming convention for metrics that does 
> not fit our use case for reporting. We need to be able to convert the stream 
> metrics to our own metrics formatting and it's cumbersome to extract group/op 
> names from pre-formatted strings the way they are set up in StreamMetricsImpl. 
> If there were a "metrics()" method of StreamMetrics to give me the underlying 
> Metrics object, I could register my own sensors/metrics as needed.





[jira] [Commented] (KAFKA-4589) Add documentation for SASL/SCRAM

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821996#comment-15821996
 ] 

ASF GitHub Bot commented on KAFKA-4589:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2369

KAFKA-4589: SASL/SCRAM documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4589

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2369.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2369


commit a05b72829ae4d4ac7af00141cf8e15944845b6f3
Author: Rajini Sivaram 
Date:   2017-01-05T14:00:18Z

KAFKA-4363: Documentation for sasl.jaas.config property

commit ef87a90f88cf213256e667bfb5125a23e67a1249
Author: Rajini Sivaram 
Date:   2017-01-13T16:11:56Z

KAFKA-4589: SASL/SCRAM documentation




> Add documentation for SASL/SCRAM
> 
>
> Key: KAFKA-4589
> URL: https://issues.apache.org/jira/browse/KAFKA-4589
> Project: Kafka
>  Issue Type: Task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Add documentation for SASL/SCRAM. This corresponds to 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms,
>  KAFKA-3751.





[jira] [Created] (KAFKA-4628) Support KTable/GlobalKTable Joins

2017-01-13 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-4628:
-

 Summary: Support KTable/GlobalKTable Joins
 Key: KAFKA-4628
 URL: https://issues.apache.org/jira/browse/KAFKA-4628
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Damian Guy
 Fix For: 0.10.3.0


In KIP-99 we added support for GlobalKTables; however, we don't currently 
support KTable/GlobalKTable joins, as they require materializing a state store 
for the join. 





Build failed in Jenkins: kafka-trunk-jdk7 #1833

2017-01-13 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4581; Fail early if multiple client login modules in

--
[...truncated 18570 lines...]
org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled PASSED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldThrowNoSuchElementOnNextIfNoNext STARTED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldThrowNoSuchElementOnNextIfNoNext PASSED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldThrowNoSuchElementOnPeekNextKeyIfNoNext STARTED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldThrowNoSuchElementOnPeekNextKeyIfNoNext PASSED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldIterateOverAllSegments STARTED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldIterateOverAllSegments PASSED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldOnlyIterateOverSegmentsInRange STARTED

org.apache.kafka.streams.state.internals.SegmentIteratorTest > 
shouldOnlyIterateOverSegmentsInRange PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged PASSED


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-13 Thread Jeff Holoman
Well done Grant!  Congrats!

On Thu, Jan 12, 2017 at 1:13 PM, Joel Koshy  wrote:

> Hey Grant - congrats!
>
> On Thu, Jan 12, 2017 at 10:00 AM, Neha Narkhede  wrote:
>
> > Congratulations, Grant. Well deserved!
> >
> > On Thu, Jan 12, 2017 at 7:51 AM Grant Henke  wrote:
> >
> > > Thanks everyone!
> > >
> > > On Thu, Jan 12, 2017 at 2:58 AM, Damian Guy 
> > wrote:
> > >
> > > > Congratulations!
> > > >
> > > > On Thu, 12 Jan 2017 at 03:35 Jun Rao  wrote:
> > > >
> > > > > Grant,
> > > > >
> > > > > Thanks for all your contribution! Congratulations!
> > > > >
> > > > > Jun
> > > > >
> > > > > On Wed, Jan 11, 2017 at 2:51 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > > > The PMC for Apache Kafka has invited Grant Henke to join as a
> > > > > > committer and we are pleased to announce that he has accepted!
> > > > > >
> > > > > > Grant contributed 88 patches, 90 code reviews, countless great
> > > > > > comments on discussions, a much-needed cleanup to our protocol
> and
> > > the
> > > > > > on-going and critical work on the Admin protocol. Throughout
> this,
> > he
> > > > > > displayed great technical judgment, high-quality work and
> > willingness
> > > > > > to contribute where needed to make Apache Kafka awesome.
> > > > > >
> > > > > > Thank you for your contributions, Grant :)
> > > > > >
> > > > > > --
> > > > > > Gwen Shapira
> > > > > > Product Manager | Confluent
> > > > > > 650.450.2760 | @gwenshap
> > > > > > Follow us: Twitter | blog
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Grant Henke
> > > Software Engineer | Cloudera
> > > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> > >
> > --
> > Thanks,
> > Neha
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #1176

2017-01-13 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4581; Fail early if multiple client login modules in

--
[...truncated 35096 lines...]

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:668)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:349)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 

[GitHub] kafka pull request #2368: MINOR: Remove unused CONSUMER_APIS and PRODUCER_AP...

2017-01-13 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2368

MINOR: Remove unused CONSUMER_APIS and PRODUCER_APIS



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka remove-unused-apis-field

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2368.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2368


commit 6c3b40376712bd4dc0d88361393354d2278c42c8
Author: Ismael Juma 
Date:   2017-01-13T12:39:29Z

Remove unused CONSUMER_APIS and PRODUCER_APIS






[GitHub] kafka pull request #2367: KAFKA-4627: Fix timing issue in consumer close tes...

2017-01-13 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2367

 KAFKA-4627: Fix timing issue in consumer close tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4627

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2367.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2367


commit 77751def3e8c87344e206d8453248328fddc5b12
Author: Rajini Sivaram 
Date:   2017-01-13T12:12:45Z

KAFKA-4627: Fix timing issue in consumer close tests






[jira] [Commented] (KAFKA-4627) Intermittent test failure in consumer close test

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821718#comment-15821718
 ] 

ASF GitHub Bot commented on KAFKA-4627:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2367

 KAFKA-4627: Fix timing issue in consumer close tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4627

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2367.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2367


commit 77751def3e8c87344e206d8453248328fddc5b12
Author: Rajini Sivaram 
Date:   2017-01-13T12:12:45Z

KAFKA-4627: Fix timing issue in consumer close tests




> Intermittent test failure in consumer close test
> 
>
> Key: KAFKA-4627
> URL: https://issues.apache.org/jira/browse/KAFKA-4627
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Jenkins PR build failure: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/830/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testLeaveGroupTimeout/
> {quote}
> java.lang.IllegalArgumentException: No requests available to node 
> localhost:1969 (id: 2147483647 rack: null)
>   at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:209)
>   at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:195)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.consumerCloseTest(KafkaConsumerTest.java:1222)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testLeaveGroupTimeout(KafkaConsumerTest.java:1148)
> {quote}





Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-01-13 Thread Alexey Ozeritsky
Hi,

We have a similar solution that has been working in production since 2014. 
You can see it here: 
https://github.com/resetius/kafka/commit/20658593e246d2184906879defa2e763c4d413fb
The idea is very simple:
1. The disk balancer runs in a separate thread inside the scheduler pool.
2. It does not touch empty partitions.
3. Before it moves a partition, it forcibly creates a new segment on the 
destination disk.
4. It moves segment by segment, from new to old.
5. The Log class works with segments on both disks.
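The segment-by-segment move in steps 3–4 can be sketched at the file level. This 
is a simplified illustration using only java.nio.file; the real patch linked above 
also has to keep the live Log consistent across both disks (step 5), which this 
sketch ignores, and the directory layout here is hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DiskBalancerSketch {

    // Moves a partition's segment files from one disk to another, one
    // segment at a time, newest first (step 4 above).
    public static void movePartition(Path srcDir, Path dstDir) throws IOException {
        Files.createDirectories(dstDir);
        List<Path> segments;
        try (Stream<Path> files = Files.list(srcDir)) {
            segments = files
                    .filter(p -> p.toString().endsWith(".log"))
                    .sorted(Comparator.comparing(Path::getFileName).reversed())
                    .collect(Collectors.toList());
        }
        for (Path segment : segments) {
            Path target = dstDir.resolve(segment.getFileName());
            // Copy first, then delete, so a crash mid-move never loses data.
            Files.copy(segment, target, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(segment);
        }
    }
}
```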

Your approach seems too complicated; moreover, it means that you have to patch 
different components of the system.
Could you clarify what you mean by topicPartition.log? Is it 
topic-partition/segment.log?

12.01.2017, 21:47, "Dong Lin" :
> Hi all,
>
> We created KIP-113: Support replicas movement between log directories.
> Please find the KIP wiki in the link
> *https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories
> .*
>
> This KIP is related to KIP-112
> :
> Handle disk failure for JBOD. They are needed in order to support JBOD in
> Kafka. Please help review the KIP. Your feedback is appreciated!
>
> Thanks,
> Dong


[jira] [Created] (KAFKA-4627) Intermittent test failure in consumer close test

2017-01-13 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4627:
-

 Summary: Intermittent test failure in consumer close test
 Key: KAFKA-4627
 URL: https://issues.apache.org/jira/browse/KAFKA-4627
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 0.10.2.0


Jenkins PR build failure: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/830/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testLeaveGroupTimeout/

{quote}
java.lang.IllegalArgumentException: No requests available to node 
localhost:1969 (id: 2147483647 rack: null)
at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:209)
at org.apache.kafka.clients.MockClient.respondFrom(MockClient.java:195)
at 
org.apache.kafka.clients.consumer.KafkaConsumerTest.consumerCloseTest(KafkaConsumerTest.java:1222)
at 
org.apache.kafka.clients.consumer.KafkaConsumerTest.testLeaveGroupTimeout(KafkaConsumerTest.java:1148)
{quote}







[jira] [Resolved] (KAFKA-4581) Fail gracefully if multiple login modules are specified in sasl.jaas.config

2017-01-13 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4581.

Resolution: Fixed

Issue resolved by pull request 2356
[https://github.com/apache/kafka/pull/2356]

> Fail gracefully if multiple login modules are specified in sasl.jaas.config
> ---
>
> Key: KAFKA-4581
> URL: https://issues.apache.org/jira/browse/KAFKA-4581
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Validate config and throw meaningful exception if multiple login modules are 
> specified for client in sasl.jaas.config property.





[jira] [Commented] (KAFKA-4581) Fail gracefully if multiple login modules are specified in sasl.jaas.config

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821577#comment-15821577
 ] 

ASF GitHub Bot commented on KAFKA-4581:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2356


> Fail gracefully if multiple login modules are specified in sasl.jaas.config
> ---
>
> Key: KAFKA-4581
> URL: https://issues.apache.org/jira/browse/KAFKA-4581
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Validate config and throw meaningful exception if multiple login modules are 
> specified for client in sasl.jaas.config property.





[GitHub] kafka pull request #2356: KAFKA-4581: Fail early if multiple client login mo...

2017-01-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2356




[jira] [Commented] (KAFKA-4626) Add consumer close change to upgrade notes

2017-01-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821564#comment-15821564
 ] 

ASF GitHub Bot commented on KAFKA-4626:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2366

KAFKA-4626: Add KafkaConsumer#close change to upgrade notes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4626

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2366


commit 7d72d77e255f40067691442cdd57cf51f07fd698
Author: Rajini Sivaram 
Date:   2017-01-13T10:23:02Z

KAFKA-4626: Add KafkaConsumer#close change to upgrade notes




> Add consumer close change to upgrade notes
> --
>
> Key: KAFKA-4626
> URL: https://issues.apache.org/jira/browse/KAFKA-4626
> Project: Kafka
>  Issue Type: Task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> KafkaConsumer#close() now takes longer because of the graceful-close changes 
> from KIP-102 (KAFKA-4426). Add this to the upgrade notes for 0.10.2.0





[GitHub] kafka pull request #2366: KAFKA-4626: Add KafkaConsumer#close change to upgr...

2017-01-13 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2366

KAFKA-4626: Add KafkaConsumer#close change to upgrade notes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4626

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2366


commit 7d72d77e255f40067691442cdd57cf51f07fd698
Author: Rajini Sivaram 
Date:   2017-01-13T10:23:02Z

KAFKA-4626: Add KafkaConsumer#close change to upgrade notes






[jira] [Created] (KAFKA-4626) Add consumer close change to upgrade notes

2017-01-13 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4626:
-

 Summary: Add consumer close change to upgrade notes
 Key: KAFKA-4626
 URL: https://issues.apache.org/jira/browse/KAFKA-4626
 Project: Kafka
  Issue Type: Task
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 0.10.2.0


KafkaConsumer#close() now takes longer because of the graceful-close changes 
from KIP-102 (KAFKA-4426). Add this to the upgrade notes for 0.10.2.0





[jira] [Commented] (KAFKA-4616) Message loss is seen when kafka-producer-perf-test.sh is running and any broker restarted in middle

2017-01-13 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821473#comment-15821473
 ] 

huxi commented on KAFKA-4616:
-

They are not missing; they were just not delivered to Kafka successfully.
bq. The guarantee that Kafka offers is that a committed message will not be 
lost, as long as there is at least one in sync replica alive, at all times.
To avoid this "data loss", try appending "retries=" 
to the command, although you might then see some duplicate messages. 
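The suggestion above can be written out as producer properties. This is a sketch: 
the retry count and broker addresses are illustrative values, not taken from the 
thread, while `retries`, `acks`, and `max.in.flight.requests.per.connection` are 
standard Kafka producer config names.

```java
import java.util.Properties;

public class ProducerRetryConfig {

    // Builds producer settings aimed at surviving a broker restart:
    // retries resends after transient errors such as NetworkException,
    // and acks=all waits for the in-sync replicas before acknowledging.
    public static Properties resilientProducerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder addresses
        props.put("retries", "100"); // illustrative retry count
        props.put("acks", "all");
        // With retries > 0, limit in-flight requests to preserve ordering
        // and reduce duplicates introduced by retries.
        props.put("max.in.flight.requests.per.connection", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(resilientProducerProps().getProperty("retries"));
    }
}
```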

> Message loss is seen when kafka-producer-perf-test.sh is running and any 
> broker restarted in middle
> ---
>
> Key: KAFKA-4616
> URL: https://issues.apache.org/jira/browse/KAFKA-4616
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
> Environment: Apache mesos
>Reporter: sandeep kumar singh
>
> if any broker is restarted while kafka-producer-perf-test.sh command is 
> running, we see message loss.
> commands i run:
> **perf command:
> $ bin/kafka-producer-perf-test.sh --num-records 10 --record-size 4096  
> --throughput 1000 --topic test3R3P3 --producer-props 
> bootstrap.servers=x.x.x.x:,x.x.x.x:,x.x.x.x:
> I am sending 10 messages, each having size 4096
> error thrown by perf command:
> 4944 records sent, 988.6 records/sec (3.86 MB/sec), 31.5 ms avg latency, 
> 433.0 max latency.
> 5061 records sent, 1012.0 records/sec (3.95 MB/sec), 67.7 ms avg latency, 
> 798.0 max latency.
> 5001 records sent, 1000.0 records/sec (3.91 MB/sec), 49.0 ms avg latency, 
> 503.0 max latency.
> 5001 records sent, 1000.2 records/sec (3.91 MB/sec), 37.3 ms avg latency, 
> 594.0 max latency.
> 5001 records sent, 1000.2 records/sec (3.91 MB/sec), 32.6 ms avg latency, 
> 501.0 max latency.
> 5000 records sent, 999.8 records/sec (3.91 MB/sec), 49.4 ms avg latency, 
> 516.0 max latency.
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> truncated
> 5001 records sent, 1000.2 records/sec (3.91 MB/sec), 33.9 ms avg latency, 
> 497.0 max latency.
> 4928 records sent, 985.6 records/sec (3.85 MB/sec), 42.1 ms avg latency, 
> 521.0 max latency.
> 5073 records sent, 1014.4 records/sec (3.96 MB/sec), 39.4 ms avg latency, 
> 418.0 max latency.
> 10 records sent, 999.950002 records/sec (3.91 MB/sec), 37.65 ms avg 
> latency, 798.00 ms max latency, 1 ms 50th, 260 ms 95th, 411 ms 99th, 571 ms 
> 99.9th.
> **consumer command:
> $ bin/kafka-console-consumer.sh --zookeeper 
> x.x.x.x:2181/dcos-service-kafka-framework --topic  test3R3P3  
> 1>~/kafka_output.log
> message stored:
> $ wc -l ~/kafka_output.log
> 99932 /home/montana/kafka_output.log
> I found only 99932 messages are stored and 68 messages are lost.
> **topic describe command:
>  $ bin/kafka-topics.sh  --zookeeper x.x.x.x:2181/dcos-service-kafka-framework 
> --describe |grep test3R3
> Topic:test3R3P3 PartitionCount:3 ReplicationFactor:3 Configs:
> Topic: test3R3P3 Partition: 0 Leader: 2 Replicas: 1,2,0 Isr: 2,0,1
> Topic: test3R3P3 Partition: 1 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
> Topic: test3R3P3 Partition: 2 Leader: 0 Replicas: 0,1,2 Isr: 2,0,1
> **consumer group command:
> $  bin/kafka-consumer-groups.sh --zookeeper 
> x.x.x.x:2181/dcos-service-kafka-framework --describe --group 
> console-consumer-9926
> GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER
> console-consumer-9926 test3R3P3 0 33265 33265 0 
> console-consumer-9926_node-44a8422fe1a0-1484127474935-c795478e-0
> console-consumer-9926 test3R3P3 1 4 4 0 
> console-consumer-9926_node-44a8422fe1a0-1484127474935-c795478e-0
> console-consumer-9926 test3R3P3 2 3 3 0 
> console-consumer-9926_node-44a8422fe1a0-1484127474935-c795478e-0
> Could you please help me understand what this error means: 
> "org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received."?
> Could you please provide a suggestion to fix this issue?
> We see this behavior every time we perform the above test scenario.
> My understanding is that there should not be any data loss while n-1 brokers 
> are alive. Is message loss an 
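On the reporter's question about the "n-1 brokers" guarantee: it only covers messages that were *committed*, i.e. acknowledged by all in-sync replicas. One way to tighten this, sketched below under assumptions: the ZooKeeper path is the placeholder copied from the report, port 9092 is assumed for the elided broker ports, and `kafka-topics.sh --alter --config` is the 0.10-era syntax (later versions use kafka-configs.sh):

```shell
# Require at least 2 in-sync replicas before a write counts as committed.
bin/kafka-topics.sh --zookeeper x.x.x.x:2181/dcos-service-kafka-framework \
  --alter --topic test3R3P3 --config min.insync.replicas=2

# Produce with acks=all and retries so the broker enforces that minimum
# and transient NetworkExceptions are retried instead of dropped.
bin/kafka-producer-perf-test.sh --num-records 10 --record-size 4096 \
  --throughput 1000 --topic test3R3P3 \
  --producer-props bootstrap.servers=x.x.x.x:9092 acks=all retries=2147483647
```

With these settings a broker restart should surface as retried (possibly duplicated) sends rather than silent loss, since the perf tool's default acks/retries favor throughput over durability.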

[jira] [Commented] (KAFKA-4622) KafkaConsumer does not properly handle authorization errors from offset fetches

2017-01-13 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821415#comment-15821415
 ] 

Ewen Cheslack-Postava commented on KAFKA-4622:
--

[~hachikuji] You filed this 2 days ago and marked it for 0.10.2.0 -- are you 
actually trying to get a patch in for this before the release? It'd be fair to 
include it in the post-feature-freeze fixes, but I want to get an idea of whether 
this is truly prioritized for the release (i.e. going to get a patch written and 
committed in the next couple of weeks) or if this is just aspirational.

> KafkaConsumer does not properly handle authorization errors from offset 
> fetches
> ---
>
> Key: KAFKA-4622
> URL: https://issues.apache.org/jira/browse/KAFKA-4622
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.1, 0.10.1.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.2.0
>
>
> It's possible to receive both group and topic authorization exceptions from 
> an OffsetFetch, but the consumer currently treats this as generic unexpected 
> errors. We should probably return {{GroupAuthorizationException}} and 
> {{TopicAuthorizationException}} to be consistent with the other consumer 
> APIs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)