[jira] [Commented] (KAFKA-2079) Support exhibitor

2015-09-24 Thread Helena Edelson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906141#comment-14906141
 ] 

Helena Edelson commented on KAFKA-2079:
---

[~joestein] Is there a ticket for the dependency work 'separating out the meta 
data storage and consensus service (async watchers and leader election)' yet? I 
am unable to find one - or perhaps this is still in the discussion phase...

> Support exhibitor
> -
>
> Key: KAFKA-2079
> URL: https://issues.apache.org/jira/browse/KAFKA-2079
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Aaron Dixon
>
> Exhibitor (https://github.com/Netflix/exhibitor) is a discovery/monitoring 
> solution for managing Zookeeper clusters. It supports use cases like 
> discovery, node replacements and auto-scaling of Zk cluster hosts (so you 
> don't have to manage a fixed set of Zk hosts--especially useful in cloud 
> environments.)
> The easiest way for Kafka to support connection to Zk clusters via exhibitor 
> is to use curator as its client. There is already a separate ticket for this: 
> KAFKA-873
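
For context, a minimal sketch of what connecting through Exhibitor with Curator 
could look like, assuming Curator's exhibitor support 
(org.apache.curator.ensemble.exhibitor); host names, ports, REST path and 
polling interval below are illustrative:

{code}
import java.util.Arrays;
import org.apache.curator.ensemble.exhibitor.DefaultExhibitorRestClient;
import org.apache.curator.ensemble.exhibitor.ExhibitorEnsembleProvider;
import org.apache.curator.ensemble.exhibitor.Exhibitors;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ExhibitorConnect {
    public static CuratorFramework connect() {
        Exhibitors exhibitors = new Exhibitors(
                Arrays.asList("exhibitor1", "exhibitor2"), 8080,
                new Exhibitors.BackupConnectionStringProvider() {
                    public String getBackupConnectionString() {
                        // used only if no Exhibitor instance is reachable
                        return "zk1:2181,zk2:2181";
                    }
                });
        ExhibitorEnsembleProvider ensembleProvider = new ExhibitorEnsembleProvider(
                exhibitors, new DefaultExhibitorRestClient(),
                "/exhibitor/v1/cluster/list", 60000,
                new ExponentialBackoffRetry(1000, 3));
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .ensembleProvider(ensembleProvider) // Exhibitor keeps the ZK list fresh
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .build();
        client.start();
        return client;
    }
}
{code}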



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2554: change 0.8.3 to 0.9.0 in ApiVersio...

2015-09-24 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/237

KAFKA-2554: change 0.8.3 to 0.9.0 in ApiVersion and other files

Updated the version from 0.8.3 to 0.9.0 in ApiVersion. Also updated it in 
gradle.properties.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-2554

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/237.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #237


commit f5d60414db2a4fd24b1c562c7d2d59e341fce3db
Author: Manikumar reddy O 
Date:   2015-09-24T11:03:31Z

change 0.8.3 to 0.9.0 in ApiVersion and other files




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (KAFKA-2569) Kafka should write its metrics to a Kafka topic

2015-09-24 Thread UTKARSH BHATNAGAR (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906189#comment-14906189
 ] 

UTKARSH BHATNAGAR edited comment on KAFKA-2569 at 9/24/15 10:45 AM:


[~otis] & [~wushujames] - This is the use case I faced a couple of months 
ago. So, I implemented the JMXTrans KafkaWriter to send JMX metrics to Kafka. 
Here is the link:

https://github.com/jmxtrans/jmxtrans/tree/master/jmxtrans-output/jmxtrans-output-kafka

So, just install JMXTrans on the Kafka instances and send the Kafka metrics to 
any Kafka cluster (itself or another one). Hope this helps. Please let me know 
if there are questions.


was (Author: utkarshcmu):
[~otis] & [~wushujames] - This is the use case I faced a couple of months 
ago. I implemented the JMXTrans KafkaWriter to send JMX metrics to Kafka. Here 
is the link:

https://github.com/jmxtrans/jmxtrans/tree/master/jmxtrans-output/jmxtrans-output-kafka

So, just install JMXTrans on the Kafka instances and send the Kafka metrics to 
any Kafka cluster (itself or another one). Hope this helps. Please let me know 
if there are questions.

> Kafka should write its metrics to a Kafka topic
> ---
>
> Key: KAFKA-2569
> URL: https://issues.apache.org/jira/browse/KAFKA-2569
> Project: Kafka
>  Issue Type: New Feature
>Reporter: James Cheng
>
> Kafka is often used to hold and transport monitoring data.
> In order to monitor Kafka itself, Kafka currently exposes many metrics via 
> JMX, which require using a tool to pull the JMX metrics, and then write them 
> to the monitoring system.
> It would be convenient if Kafka could simply send its metrics to a Kafka 
> topic. This would make most sense if the Kafka topic was in a different Kafka 
> cluster, but could still be useful even if it was sent to a topic in the same 
> Kafka cluster.
> Of course, if sent to the same cluster, it would not be accessible if the 
> cluster itself was down.
> This would allow monitoring of Kafka itself without requiring people to set 
> up their own JMX-to-monitoring-system pipelines.
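
The JMX-to-Kafka pipeline the ticket describes fits in a few lines. A minimal 
sketch, assuming it runs inside the JVM being measured (so the platform MBean 
server holds the kafka.server beans), a broker at localhost:9092 and a 
pre-created "kafka-metrics" topic (all illustrative):

{code}
import java.lang.management.ManagementFactory;
import java.util.Properties;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class JmxToKafka {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);

        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // One pass over all kafka.server MBeans; a real bridge would loop on a timer.
        for (ObjectName name : server.queryNames(new ObjectName("kafka.server:*"), null)) {
            for (MBeanAttributeInfo attr : server.getMBeanInfo(name).getAttributes()) {
                try {
                    Object value = server.getAttribute(name, attr.getName());
                    producer.send(new ProducerRecord<String, String>(
                            "kafka-metrics", name + "/" + attr.getName(), String.valueOf(value)));
                } catch (Exception e) {
                    // some attributes are unreadable in practice; skip them
                }
            }
        }
        producer.close();
    }
}
{code}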



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2569) Kafka should write its metrics to a Kafka topic

2015-09-24 Thread UTKARSH BHATNAGAR (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906189#comment-14906189
 ] 

UTKARSH BHATNAGAR commented on KAFKA-2569:
--

[~otis] & [~wushujames] - This is the use case I faced a couple of months 
ago. I implemented the JMXTrans KafkaWriter to send JMX metrics to Kafka. Here 
is the link:

https://github.com/jmxtrans/jmxtrans/tree/master/jmxtrans-output/jmxtrans-output-kafka

So, just install JMXTrans on the Kafka instances and send the Kafka metrics to 
any Kafka cluster (itself or another one). Hope this helps. Please let me know 
if there are questions.

> Kafka should write its metrics to a Kafka topic
> ---
>
> Key: KAFKA-2569
> URL: https://issues.apache.org/jira/browse/KAFKA-2569
> Project: Kafka
>  Issue Type: New Feature
>Reporter: James Cheng
>
> Kafka is often used to hold and transport monitoring data.
> In order to monitor Kafka itself, Kafka currently exposes many metrics via 
> JMX, which require using a tool to pull the JMX metrics, and then write them 
> to the monitoring system.
> It would be convenient if Kafka could simply send its metrics to a Kafka 
> topic. This would make most sense if the Kafka topic was in a different Kafka 
> cluster, but could still be useful even if it was sent to a topic in the same 
> Kafka cluster.
> Of course, if sent to the same cluster, it would not be accessible if the 
> cluster itself was down.
> This would allow monitoring of Kafka itself without requiring people to set 
> up their own JMX-to-monitoring-system pipelines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2554) change 0.8.3 to 0.9.0 in ApiVersion

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906203#comment-14906203
 ] 

ASF GitHub Bot commented on KAFKA-2554:
---

GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/237

KAFKA-2554: change 0.8.3 to 0.9.0 in ApiVersion and other files

Updated the version from 0.8.3 to 0.9.0 in ApiVersion. Also updated it in 
gradle.properties.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-2554

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/237.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #237


commit f5d60414db2a4fd24b1c562c7d2d59e341fce3db
Author: Manikumar reddy O 
Date:   2015-09-24T11:03:31Z

change 0.8.3 to 0.9.0 in ApiVersion and other files




> change 0.8.3 to 0.9.0 in ApiVersion
> ---
>
> Key: KAFKA-2554
> URL: https://issues.apache.org/jira/browse/KAFKA-2554
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since we are renaming 0.8.3 to 0.9.0, we need to change the version in 
> ApiVersion accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2554) change 0.8.3 to 0.9.0 in ApiVersion

2015-09-24 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy reassigned KAFKA-2554:
--

Assignee: Manikumar Reddy

> change 0.8.3 to 0.9.0 in ApiVersion
> ---
>
> Key: KAFKA-2554
> URL: https://issues.apache.org/jira/browse/KAFKA-2554
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since we are renaming 0.8.3 to 0.9.0, we need to change the version in 
> ApiVersion accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Producer becomes slow over time

2015-09-24 Thread Prabhjot Bharaj
Hi,

I would like to dig deep into this issue. I've changed log4j.properties to
log at the ALL level everywhere, but I am getting lost in the logs.

Any pointers would be welcome

Please let me know if you would need any information regarding this

Thanks,
Prabhjot

On Wed, Sep 23, 2015 at 6:46 PM, Prabhjot Bharaj 
wrote:

> Hello Folks,
>
> I've noticed that 2 producer machines that I had configured have become
> very slow over time.
> They are giving 17-19 MB/s.
>
> But a producer that I set up today is giving 70 MB/s as the write throughput.
>
> If I look at the contents of the bin, libs, and config directories, nothing is
> different in the files on any of the producer machines.
>
> The producer is running on the Kafka machines themselves.
>
> Request your expertise.
>
> Regards,
> Prabhjot
>
>
>


-- 
-
"There are only 10 types of people in the world: Those who understand
binary, and those who don't"


Re: Producer becomes slow over time

2015-09-24 Thread Helleren, Erik
What happens when the new producer that is getting 70 MB/s is started on a
machine that is not part of the kafka cluster?

Can you include your topic description/configuration, producer
configuration, and broker configuration?

On 9/24/15, 1:44 AM, "Prabhjot Bharaj"  wrote:

>Hi,
>
>I would like to dig deep into this issue. I've changed log4j.properties to
>log at the ALL level everywhere, but I am getting lost in the logs.
>
>Any pointers would be welcome
>
>Please let me know if you would need any information regarding this
>
>Thanks,
>Prabhjot
>
>On Wed, Sep 23, 2015 at 6:46 PM, Prabhjot Bharaj 
>wrote:
>
>> Hello Folks,
>>
>> I've noticed that 2 producer machines that I had configured have become
>> very slow over time.
>> They are giving 17-19 MB/s.
>>
>> But a producer that I set up today is giving 70 MB/s as the write
>> throughput.
>>
>> If I look at the contents of the bin, libs, and config directories, nothing is
>> different in the files on any of the producer machines.
>>
>> The producer is running on the Kafka machines themselves.
>>
>> Request your expertise.
>>
>> Regards,
>> Prabhjot
>>
>>
>>
>
>
>-- 
>-
>"There are only 10 types of people in the world: Those who understand
>binary, and those who don't"



[jira] [Comment Edited] (KAFKA-2561) Optionally support OpenSSL for SSL/TLS

2015-09-24 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902896#comment-14902896
 ] 

Ismael Juma edited comment on KAFKA-2561 at 9/24/15 2:42 PM:
-

In order to implement this properly (as opposed to a simple test), the 
following steps are needed:

1. Add an optional build dependency on netty-tcnative. This library contains a 
fork of tomcat native that is available in Maven and is available for major 
platforms (Linux, OS X, Windows). It also handles extracting the 
platform-specific JNI code at runtime (similar to snappy-java). apr and openssl 
need to be installed separately.
2. Provide an implementation of `SSLEngine` based on OpenSSL. The easy option 
would be to add an optional dependency on `netty-handler`, which includes this. 
If this is not acceptable, there are some alternatives like extracting the code 
into a separate library or copying it into Kafka.
3. Add a way to configure the `SSLEngine` implementation (OpenSSL or JDK).
4. Change `SSLFactory` to build the appropriate `SSLEngine` based on the 
configuration added in `3`.
5. Potentially introduce a runtime mechanism to select `OpenSslEngine` by 
default if the required libraries are present, since it's much faster (a 
minimal selection sketch follows this list).
6. Potentially update `SSLTransportLayer` to handle differences in behaviour 
between the different `SSLEngine` implementations (the need for this depends on 
whether the issues reported to Netty are fixed or not). The main one is that 
`OpenSslEngine.unwrap` consumes multiple SSL records (instead of just one) and 
it may produce a different number of SSL records (if they don't all fit into 
the application buffer).
7. Use `allocateDirect` to allocate the buffers in `SSLTransportLayer` when 
using `OpenSslEngine` to avoid copies on each `wrap` and `unwrap` call.
8. Design and implement the story around the formats for keys, certificates, 
key chains and certificate chains supported. OpenSSL doesn't understand the JKS 
format since it's Java-specific. Netty uses the `PKCS#8` format for keys and 
PEM format for chains when the OpenSSL engine is used.
9. Update tests to test all `SSLEngine` implementations.
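
A minimal sketch of steps 2-5, assuming an optional netty-handler dependency 
(how Kafka would actually expose this is exactly what needs to be agreed): use 
`OpenSslEngine` when netty-tcnative's native libraries are present, otherwise 
fall back to the JDK `SSLEngine` Kafka uses today.

{code}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import io.netty.buffer.ByteBufAllocator;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

public class SslEngineSelector {
    public static SSLEngine newClientEngine() throws Exception {
        if (OpenSsl.isAvailable()) {
            // netty-tcnative (with OpenSSL and apr) was found at runtime
            return SslContextBuilder.forClient()
                    .sslProvider(SslProvider.OPENSSL)
                    .build()
                    .newEngine(ByteBufAllocator.DEFAULT);
        }
        // JDK fallback: the standard SSLEngine path
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        engine.setUseClientMode(true);
        return engine;
    }
}
{code}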

Testing of this is more complicated than usual due to the native code aspect 
and we would have to test it in all of our supported platforms.

Given the work that I've already done, it would probably take a couple of weeks 
to agree on the details and implement the code (including unit tests). Maybe 
another week for testing on the various platforms.



was (Author: ijuma):
In order to implement this properly (as opposed to a simple test), the 
following steps are needed:

1. Add an optional build dependency on netty-tcnative. This library contains a 
fork of tomcat native that is available in Maven and is available for major 
platforms (Linux, OS X, Windows). It also handles extracting the 
platform-specific JNI code at runtime (similar to snappy-java). apr and openssl 
need to be installed separately.
2. Provide an implementation of `SSLEngine` based on OpenSSL. The easy option 
would be to add an optional dependency on `netty-handler`, which includes this. 
If this is not acceptable, there are some alternatives like extracting the code 
into a separate library or copying it into Kafka.
3. Add a way to configure the `SSLEngine` implementation (OpenSSL or JDK).
4. Change `SSLFactory` to build the appropriate `SSLEngine` based on the 
configuration added in `3`.
5. Potentially introduce a runtime mechanism to select `OpenSslEngine` by 
default if the required libraries are present (since it's much faster)
6. Potentially update `SSLTransportLayer` to handle differences in behaviour 
between the different `SSLEngine` implementations (the need for this depends on 
whether the issues reported to Netty are fixed or not). In addition to the 
two issues mentioned in the description, there is also an issue related to the 
size of the `applicationReadBuffer`.
7. Use `allocateDirect` to allocate the buffers in `SSLTransportLayer` when 
using `OpenSslEngine` to avoid copies on each `wrap` and `unwrap` call.
8. Design and implement the story around the formats for keys, certificates, 
key chains and certificate chains supported. OpenSSL doesn't understand the JKS 
format since it's Java-specific. Netty uses the `PKCS#8` format for keys and 
PEM format for chains when the OpenSSL engine is used.
9. Update tests to test all `SSLEngine` implementations.

Testing of this is more complicated than usual due to the native code aspect 
and we would have to test it in all of our supported platforms.

Given the work that I've already done, it would probably take a couple of weeks 
to agree on the details and implement the code (including unit tests). Maybe 
another week for testing on the various platforms.


> Optionally support OpenSSL for SSL/TLS 
> ---
>
> 

[jira] [Commented] (KAFKA-2573) Mirror maker system test hangs and eventually fails

2015-09-24 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906454#comment-14906454
 ] 

Ismael Juma commented on KAFKA-2573:


I think the test should fail fast indeed. The system tests are meant to test 
the system working together and if one part is failing (for whatever reason), 
we should know about it. This is one of the main benefits of system tests when 
compared to the JUnit tests we have.

> Mirror maker system test hangs and eventually fails
> ---
>
> Key: KAFKA-2573
> URL: https://issues.apache.org/jira/browse/KAFKA-2573
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Due to changes made in KAFKA-2015, handling of {{--consumer.config}} has 
> changed; more details are specified in KAFKA-2467. This leads to the exception:
> {code}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
>   at kafka.utils.Pool.keys(Pool.scala:77)
>   at 
> kafka.consumer.FetchRequestAndResponseStatsRegistry$.removeConsumerFetchRequestAndResponseStats(FetchRequestAndResponseStats.scala:69)
>   at 
> kafka.metrics.KafkaMetricsGroup$.removeAllConsumerMetrics(KafkaMetricsGroup.scala:189)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:200)
>   at kafka.consumer.OldConsumer.stop(BaseConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:98)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:57)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:41)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2573) Mirror maker system test hangs and eventually fails

2015-09-24 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906454#comment-14906454
 ] 

Ismael Juma edited comment on KAFKA-2573 at 9/24/15 2:56 PM:
-

I think the test should fail fast indeed. The system tests are meant to test 
the system working together and if one part is failing (for whatever reason), 
we should know about it. This is one of the main benefits of system tests when 
compared to the JUnit tests we have (which isolate things more).


was (Author: ijuma):
I think the test should fail fast indeed. The system tests are meant to test 
the system working together and if one part is failing (for whatever reason), 
we should know about it. This is one of the main benefits of system tests when 
compared to the JUnit tests we have.

> Mirror maker system test hangs and eventually fails
> ---
>
> Key: KAFKA-2573
> URL: https://issues.apache.org/jira/browse/KAFKA-2573
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Due to changes made in KAFKA-2015, handling of {{--consumer.config}} has 
> changed; more details are specified in KAFKA-2467. This leads to the exception:
> {code}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
>   at kafka.utils.Pool.keys(Pool.scala:77)
>   at 
> kafka.consumer.FetchRequestAndResponseStatsRegistry$.removeConsumerFetchRequestAndResponseStats(FetchRequestAndResponseStats.scala:69)
>   at 
> kafka.metrics.KafkaMetricsGroup$.removeAllConsumerMetrics(KafkaMetricsGroup.scala:189)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:200)
>   at kafka.consumer.OldConsumer.stop(BaseConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:98)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:57)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:41)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2561) Optionally support OpenSSL for SSL/TLS

2015-09-24 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902896#comment-14902896
 ] 

Ismael Juma edited comment on KAFKA-2561 at 9/24/15 2:37 PM:
-

In order to implement this properly (as opposed to a simple test), the 
following steps are needed:

1. Add an optional build dependency on netty-tcnative. This library contains a 
fork of tomcat native that is available in Maven and is available for major 
platforms (Linux, OS X, Windows). It also handles extracting the 
platform-specific JNI code at runtime (similar to snappy-java). apr and openssl 
need to be installed separately.
2. Provide an implementation of `SSLEngine` based on OpenSSL. The easy option 
would be to add an optional dependency on `netty-handler`, which includes this. 
If this is not acceptable, there are some alternatives like extracting the code 
into a separate library or copying it into Kafka.
3. Add a way to configure the `SSLEngine` implementation (OpenSSL or JDK).
4. Change `SSLFactory` to build the appropriate `SSLEngine` based on the 
configuration added in `3`.
5. Potentially introduce a runtime mechanism to select `OpenSslEngine` by 
default if the required libraries are present (since it's much faster)
6. Potentially update `SSLTransportLayer` to handle differences in behaviour 
between the different `SSLEngine` implementations (the need for this depends on 
whether the issues reported to Netty are fixed or not). In addition to the 
two issues mentioned in the description, there is also an issue related to the 
size of the `applicationReadBuffer`.
7. Use `allocateDirect` to allocate the buffers in `SSLTransportLayer` when 
using `OpenSslEngine` to avoid copies on each `wrap` and `unwrap` call.
8. Design and implement the story around the formats for keys, certificates, 
key chains and certificate chains supported. OpenSSL doesn't understand the JKS 
format since it's Java-specific. Netty uses the `PKCS#8` format for keys and 
PEM format for chains when the OpenSSL engine is used.
9. Update tests to test all `SSLEngine` implementations.

Testing of this is more complicated than usual due to the native code aspect 
and we would have to test it in all of our supported platforms.

Given the work that I've already done, it would probably take a couple of weeks 
to agree on the details and implement the code (including unit tests). Maybe 
another week for testing on the various platforms.



was (Author: ijuma):
In order to implement this properly (as opposed to a simple test), the 
following steps are needed:

1. Add an optional build dependency on netty-tcnative. This library contains a 
fork of tomcat native that is available in Maven and is available for major 
platforms (Linux, OS X, Windows). It also handles extracting the 
platform-specific JNI code at runtime (similar to snappy-java). apr and openssl 
need to be installed separately.
2. Provide an implementation of `SSLEngine` based on OpenSSL. The easy option 
would be to add an optional dependency on `netty-handler`, which includes this. 
If this is not acceptable, there are some alternatives like extracting the code 
into a separate library or copying it into Kafka.
3. Add a way to configure the `SSLEngine` implementation (OpenSSL or JDK).
4. Change `SSLFactory` to build the appropriate `SSLEngine` based on the 
configuration added in `3`.
5. Potentially introduce a runtime mechanism to select `OpenSslEngine` by 
default if the required libraries are present (since it's much faster)
6. Potentially update `SSLTransportLayer` to handle differences in behaviour 
between the different `SSLEngine` implementations (the need for this depends on 
whether the issues reported to Netty are fixed or not). In addition to the 
two issues mentioned in the description, there is also an issue related to the 
size of the `applicationReadBuffer`.
7. Design and implement the story around the formats for keys, certificates, 
key chains and certificate chains supported. OpenSSL doesn't understand the JKS 
format since it's Java-specific. Netty uses the `PKCS#8` format for keys and 
PEM format for chains when the OpenSSL engine is used.
8. Update tests to test all `SSLEngine` implementations.

Testing of this is more complicated than usual due to the native code aspect 
and we would have to test it in all of our supported platforms.

Given the work that I've already done, it would probably take a couple of weeks 
to agree on the details and implement the code (including unit tests). Maybe 
another week for testing on the various platforms.


> Optionally support OpenSSL for SSL/TLS 
> ---
>
> Key: KAFKA-2561
> URL: https://issues.apache.org/jira/browse/KAFKA-2561
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.9.0.0
>  

Wrappers for Kafka APIs

2015-09-24 Thread Karthikeyan Annamalai
Hello there,

Hope you are all having a great time with Kafka. I am just about to start with
it. I am looking for an API that acts as a connection manager in my app: say,
getting a producer instance and publishing, getting the list of all possible
brokers, and so on. I know we have snippets for all of this in the
documentation, but I just wanted to check whether anything exists that
combines these and acts as a manager. If so, please share the repo.

If not, let's build one. Your ideas/suggestions are appreciated.

Thanks,
Karthikeyan
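
One possible starting point, using only kafka-clients; the class and method 
names below are made up for illustration. It owns one shared producer, since a 
single KafkaProducer instance is thread-safe and meant to be shared:

{code}
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public final class ProducerManager {
    private final Producer<String, String> producer;

    public ProducerManager(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<String, String>(props);
    }

    public void publish(String topic, String key, String value) {
        producer.send(new ProducerRecord<String, String>(topic, key, value));
    }

    // Partition metadata also tells you which broker leads each partition.
    public List<PartitionInfo> partitionsFor(String topic) {
        return producer.partitionsFor(topic);
    }

    public void close() {
        producer.close();
    }
}
{code}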


[jira] [Commented] (KAFKA-2576) ConsumerPerformance hangs when SSL enabled for Multi-Partition Topic

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906592#comment-14906592
 ] 

ASF GitHub Bot commented on KAFKA-2576:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/236


> ConsumerPerformance hangs when SSL enabled for Multi-Partition Topic
> 
>
> Key: KAFKA-2576
> URL: https://issues.apache.org/jira/browse/KAFKA-2576
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Running the ConsumerPerformance using a multi partition topic causes it to 
> hang (or execute with no results).
> bin/kafka-topics.sh --create --zookeeper server:2181 --replication-factor 1 
> --partitions 50  --topic 50p
> bin/kafka-producer-perf-test.sh --broker-list server:9092 --topic 50p  
> --new-producer --messages 100 --message-size 1000
> #Works ok
> bin/kafka-consumer-perf-test.sh  --broker-list server:9092  --messages 
> 100  --new-consumer --topic 50p 
> #Hangs
> bin/kafka-consumer-perf-test.sh  --broker-list server:9093  --messages 
> 100  --new-consumer --topic 50p --consumer.config ssl.properties
> Running the same without SSL enabled works as expected.  
> Running the same using a single partition topic works as expected.  
> Tested locally and on EC2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-2548) kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0

2015-09-24 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reopened KAFKA-2548:


I was just testing the fix to the script.

> kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0
> -
>
> Key: KAFKA-2548
> URL: https://issues.apache.org/jira/browse/KAFKA-2548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>  Labels: newbie
> Fix For: 0.9.0.0
>
>
> Trying to update JIRA where the fix version is set to '0.9.0.0', I get the 
> following error:
> {code}
> Traceback (most recent call last):
>   File "kafka-merge-pr.py", line 474, in 
> main()
>   File "kafka-merge-pr.py", line 460, in main
> resolve_jira_issues(commit_title, merged_refs, jira_comment)
>   File "kafka-merge-pr.py", line 317, in resolve_jira_issues
> resolve_jira_issue(merge_branches, comment, jira_id)
>   File "kafka-merge-pr.py", line 285, in resolve_jira_issue
> (major, minor, patch) = v.split(".")
> ValueError: too many values to unpack
> {code}
> We need to handle 3 and 4 part versions (x.y.z and x.y.z.w)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2548) kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0

2015-09-24 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2548.

Resolution: Fixed

Issue resolved by pull request 6
[https://github.com/ijuma/kafka/pull/6]

> kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0
> -
>
> Key: KAFKA-2548
> URL: https://issues.apache.org/jira/browse/KAFKA-2548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>  Labels: newbie
> Fix For: 0.9.0.0
>
>
> Trying to update JIRA where the fix version is set to '0.9.0.0', I get the 
> following error:
> {code}
> Traceback (most recent call last):
>   File "kafka-merge-pr.py", line 474, in 
> main()
>   File "kafka-merge-pr.py", line 460, in main
> resolve_jira_issues(commit_title, merged_refs, jira_comment)
>   File "kafka-merge-pr.py", line 317, in resolve_jira_issues
> resolve_jira_issue(merge_branches, comment, jira_id)
>   File "kafka-merge-pr.py", line 285, in resolve_jira_issue
> (major, minor, patch) = v.split(".")
> ValueError: too many values to unpack
> {code}
> We need to handle 3 and 4 part versions (x.y.z and x.y.z.w)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [Discussion] KIP-34 Add Partitioner Change Listener to Partitioner Interface

2015-09-24 Thread Bhavesh Mistry
Hi Becket,

Thanks for answering and providing feedback. I will withdraw the KIP and
put it into the rejected section.

Thanks,

Bhavesh
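
For the record, here is a compilable sketch of the ChangePartitioner idea from 
the quoted thread below; the class name, the onChange() hook, and delegating to 
DefaultPartitioner are illustrative assumptions, not an agreed API:

{code}
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.clients.producer.internals.DefaultPartitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;

public class ChangePartitioner implements Partitioner {
    private final DefaultPartitioner delegate = new DefaultPartitioner();
    private Cluster cachedCluster;

    public void configure(Map<String, ?> configs) {
        delegate.configure(configs);
    }

    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Diff only the topic being produced to, as suggested in the thread.
        if (cachedCluster != null && cachedCluster != cluster
                && partitionCount(cachedCluster, topic) != partitionCount(cluster, topic)) {
            onChange(topic, cluster);
        }
        cachedCluster = cluster;
        return delegate.partition(topic, key, keyBytes, value, valueBytes, cluster);
    }

    // Hook for whoever cares: log the event, switch strategy, etc.
    protected void onChange(String topic, Cluster cluster) {
    }

    private static int partitionCount(Cluster cluster, String topic) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        return partitions == null ? 0 : partitions.size();
    }

    public void close() {
        delegate.close();
    }
}
{code}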

On Mon, Sep 21, 2015 at 9:53 AM, Jiangjie Qin  wrote:
> Hey Bhavesh,
>
> I kind of think this metadata change capture logic should be implemented by
> each user by themselves for the following reasons:
>
> 1. Most users do not really care about partition changes. Adding the
> logic/interface to the default partitioner means that users who don't care
> about the partition change are paying the price of making a cluster diff
> for each metadata update. For a big cluster, this metadata diff could be
> costly depending on how frequently the metadata is refreshed.
>
> 2. In some cases, a user might only care about partition changes for some
> specific topic; in that case, there is no need to do a cluster diff for all
> the topics a producer is producing data to. If the cluster diff is
> implemented in user code, it would be more efficient because the user can
> check only the topics they are interested in. Also, different users might
> care about different changes in the metadata, e.g. topic create/delete/node
> change, etc. So it seems better to leave the actual metadata change capture
> logic to the user instead of doing it in the producer.
>
> 3. The cluster diff code itself is short and not complicated, so even if
> users do it on their own it should be simple, e.g.:
> if (this.cachedCluster.hashCode() != cluster.hashCode()) {
>   for (String topic : cluster.topics()) {
>     if (this.cachedCluster.topics().contains(topic) &&
>         this.cachedCluster.partitionsForTopic(topic).size() !=
>             cluster.partitionsForTopic(topic).size()) {
>       // handle partition change for this topic
>     }
>   }
> }
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Sep 21, 2015 at 9:13 AM, Bhavesh Mistry 
> wrote:
>
>> Hi Jiangjie,
>>
>> Thanks for the valuable feedback.
>>
>> 1) Thread coordination for partition changes could be an issue.
>>
>> I do agree with you that coordination between the application thread
>> and the sender thread would be a tough one. The only concern I had was to
>> share the same logic you described among all the partitioner
>> interface implementations, and let the Kafka framework take care
>> of doing the diff exactly like you describe.
>>
>> In a multithreaded environment, the change listener would be called from
>> the same thread that has just finished receiving the metadata update.
>>
>> Metadata Listener:
>>
>>
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/Metadata.java#L163
>>
>> producer.send()
>>
>>
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L370
>>
>> But the behavior of onChange() would not be different from what happens today.
>>
>> For example,
>>
>> public Future<RecordMetadata> send(ProducerRecord<K, V> record,
>> Callback callback) {
>>
>> // Determine the partition for the message
>>
>> int partition = partition(record, serializedKey, serializedValue,
>> metadata.fetch());
>>
>> /*
>> A metadata update can occur after the application thread determines the
>> partition for a given message but before the message is added to the record
>> queue, i.e. the cluster change happens in between. So in my opinion the
>> behavior is the same.
>> */
>> RecordAccumulator.RecordAppendResult result =
>> accumulator.append(tp, serializedKey, serializedValue, callback);
>>
>>
>>
>> What do you think of adding the diff logic you describe to the default
>> partitioner implementation, or to another implementation class (call it
>> ChangePartitioner) whose partition() method calls an onChange() method, so
>> whoever cares or needs to know can do what they like (log the event, or use
>> it to change the partitioning strategy, etc.)?
>>
>> This gives us the ability to share the diff code, so that not every
>> implementation has to implement the diff logic; that is the main concern.
>>
>>
>> Thanks,
>>
>> Bhavesh
>>
>>
>> On Fri, Sep 18, 2015 at 3:47 PM, Jiangjie Qin 
>> wrote:
>> > Hey Bhavesh,
>> >
>> > I think it is useful to notify the user about the partition change.
>> >
>> > The problem of having a listener in the producer is that it is hard to
>> > guarantee the synchronization. For example, consider the following
>> > sequence:
>> > 1. producer sender thread refreshes the metadata with partition change.
>> > 2. user thread calls send() with a customized partitioner; the partitioner
>> > decides the partition with the new metadata refreshed in step 1.
>> > 3. producer sender thread calls onPartitionChange()
>> >
>> > At that point, the message sent in step 2 was sent using the new
>> > metadata.
>> > If we update the metadata after invoking onPartitionChange(), it is a
>> > little strange because the metadata has not changed yet.
>> >
>> > Also, the metadata refresh can happen in the caller thread as well; not
>> > sure how it would work with multiple caller threads.
>> >
>> > I am thinking it 

[GitHub] kafka pull request: KAFKA-2576; ConsumerPerformance hangs when SSL...

2015-09-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/236


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2576) ConsumerPerformance hangs when SSL enabled for Multi-Partition Topic

2015-09-24 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2576:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ConsumerPerformance hangs when SSL enabled for Multi-Partition Topic
> 
>
> Key: KAFKA-2576
> URL: https://issues.apache.org/jira/browse/KAFKA-2576
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ismael Juma
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Running the ConsumerPerformance using a multi partition topic causes it to 
> hang (or execute with no results).
> bin/kafka-topics.sh --create --zookeeper server:2181 --replication-factor 1 
> --partitions 50  --topic 50p
> bin/kafka-producer-perf-test.sh --broker-list server:9092 --topic 50p  
> --new-producer --messages 100 --message-size 1000
> #Works ok
> bin/kafka-consumer-perf-test.sh  --broker-list server:9092  --messages 
> 100  --new-consumer --topic 50p 
> #Hangs
> bin/kafka-consumer-perf-test.sh  --broker-list server:9093  --messages 
> 100  --new-consumer --topic 50p --consumer.config ssl.properties
> Running the same without SSL enabled works as expected.  
> Running the same using a single partition topic works as expected.  
> Tested locally and on EC2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2554) change 0.8.3 to 0.9.0 in ApiVersion

2015-09-24 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2554.
-
Resolution: Fixed

Thanks for contributing!

> change 0.8.3 to 0.9.0 in ApiVersion
> ---
>
> Key: KAFKA-2554
> URL: https://issues.apache.org/jira/browse/KAFKA-2554
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since we are renaming 0.8.3 to 0.9.0, we need to change the version in 
> ApiVersion accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2554) change 0.8.3 to 0.9.0 in ApiVersion

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906876#comment-14906876
 ] 

ASF GitHub Bot commented on KAFKA-2554:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/237


> change 0.8.3 to 0.9.0 in ApiVersion
> ---
>
> Key: KAFKA-2554
> URL: https://issues.apache.org/jira/browse/KAFKA-2554
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since we are renaming 0.8.3 to 0.9.0, we need to change the version in 
> ApiVersion accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2554: change 0.8.3 to 0.9.0 in ApiVersio...

2015-09-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/237


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2417) Ducktape tests for SSL/TLS

2015-09-24 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906722#comment-14906722
 ] 

Ismael Juma commented on KAFKA-2417:


Sorry for the delay, [~rsivaram]. Are you still willing to help with this? I had 
a chat with [~granders] and he said he would welcome some help too. :) We 
probably should define what tests would be worth writing and split this issue 
into multiple tasks that are more specific. Perhaps one task could be the one 
you suggested: "run all ducktape tests with SSL-enabled clients and brokers". 
What do you think?

> Ducktape tests for SSL/TLS
> --
>
> Key: KAFKA-2417
> URL: https://issues.apache.org/jira/browse/KAFKA-2417
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The tests should be complementary to the unit/integration tests written as 
> part of KAFKA-1685.
> Things to consider:
> * Upgrade/downgrade to turning on/off SSL
> * Failure testing
> * Expired/revoked certificates
> * Renegotiation
> Some changes to ducktape may be required for upgrade scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #633

2015-09-24 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-1387: Kafka getting stuck creating ephemeral node it has already 
created when two zookeeper sessions are established in a very short period of 
time

[wangguoz] KAFKA-2548; kafka-merge-pr tool fails to update JIRA with fix 
version 0.9.0.0

--
[...truncated 2562 lines...]
kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.api.ConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.network.SocketServerTest > testSSLSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeralRecursive PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral PASSED

kafka.zk.ZKEphemeralTest > testSameSession PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.api.ConsumerTest > testPatternUnsubscription PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.api.ConsumerTest > testGroupConsumption PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ConsumerTest > testPartitionsFor PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ConsumerTest > testSimpleConsumption PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.api.ConsumerTest > testPartitionPauseAndResume PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.api.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.ConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.api.ConsumerTest > testAutoOffsetReset PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.api.ConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ConsumerTest > testCommitMetadata PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.zk.ZKPathTest > 

[jira] [Updated] (KAFKA-2548) kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0

2015-09-24 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2548:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0
> -
>
> Key: KAFKA-2548
> URL: https://issues.apache.org/jira/browse/KAFKA-2548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> Trying to update JIRA where the fix version is set to '0.9.0.0', I get the 
> following error:
> {code}
> Traceback (most recent call last):
>   File "kafka-merge-pr.py", line 474, in 
> main()
>   File "kafka-merge-pr.py", line 460, in main
> resolve_jira_issues(commit_title, merged_refs, jira_comment)
>   File "kafka-merge-pr.py", line 317, in resolve_jira_issues
> resolve_jira_issue(merge_branches, comment, jira_id)
>   File "kafka-merge-pr.py", line 285, in resolve_jira_issue
> (major, minor, patch) = v.split(".")
> ValueError: too many values to unpack
> {code}
> We need to handle 3 and 4 part versions (x.y.z and x.y.z.w)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2548; kafka-merge-pr tool fails to updat...

2015-09-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/238


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-09-24 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2578:
--

 Summary: Client Metadata internal state should be synchronized
 Key: KAFKA-2578
 URL: https://issues.apache.org/jira/browse/KAFKA-2578
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
Priority: Trivial


Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
should be synchronized like the rest of the internal Metadata state.
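
A minimal sketch of the proposed fix; the field and method names are assumed 
from the ticket text, and the types are stand-ins rather than the actual 
Metadata internals:

{code}
import java.util.ArrayList;
import java.util.List;

public class MetadataSketch {
    public interface Listener { void onMetadataUpdate(); }

    private final List<Listener> listeners = new ArrayList<Listener>();
    private boolean needMetadataForAllTopics;

    // Guard the new state with the same monitor as the rest of Metadata.
    public synchronized boolean needMetadataForAllTopics() {
        return needMetadataForAllTopics;
    }

    public synchronized void needMetadataForAllTopics(boolean needed) {
        this.needMetadataForAllTopics = needed;
    }

    public synchronized void addListener(Listener listener) {
        listeners.add(listener);
    }
}
{code}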



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2579) Unauthorized clients should not be able to join groups

2015-09-24 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2579:
--

 Summary: Unauthorized clients should not be able to join groups 
 Key: KAFKA-2579
 URL: https://issues.apache.org/jira/browse/KAFKA-2579
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.9.0.0
Reporter: Jason Gustafson


The JoinGroup authorization is only checked in the response callback which is 
invoked after the request has been forwarded to the ConsumerCoordinator and the 
client has joined the group. This allows unauthorized members to impact the 
rest of the group since the coordinator will assign partitions to them. It 
would be better to check permission and return immediately if the client is 
unauthorized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2390) OffsetOutOfRangeException should contain the Offset and Partition info.

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907074#comment-14907074
 ] 

ASF GitHub Bot commented on KAFKA-2390:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/239


> OffsetOutOfRangeException should contain the Offset and Partition info.
> ---
>
> Key: KAFKA-2390
> URL: https://issues.apache.org/jira/browse/KAFKA-2390
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
>
> Currently when seek() finishes, the offset sought to is not verified, and an 
> OffsetOutOfRangeException might be thrown later. To let users take 
> action when the OffsetOutOfRangeException is thrown, we need to provide more 
> information in the exception.
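
A sketch of what the enriched exception could carry; the accessor name and 
payload shape here are assumptions, not the final API:

{code}
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

public class OffsetOutOfRangeException extends RuntimeException {
    private final Map<TopicPartition, Long> outOfRangeOffsets;

    public OffsetOutOfRangeException(Map<TopicPartition, Long> outOfRangeOffsets) {
        super("Offsets out of range for partitions: " + outOfRangeOffsets);
        this.outOfRangeOffsets = outOfRangeOffsets;
    }

    // The offset that was requested for each out-of-range partition, so callers
    // can decide where to seek next.
    public Map<TopicPartition, Long> offsetOutOfRangePartitions() {
        return outOfRangeOffsets;
    }
}
{code}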



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2579) Unauthorized clients should not be able to join groups

2015-09-24 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2579:
--

Assignee: Jason Gustafson

> Unauthorized clients should not be able to join groups 
> ---
>
> Key: KAFKA-2579
> URL: https://issues.apache.org/jira/browse/KAFKA-2579
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The JoinGroup authorization is only checked in the response callback which is 
> invoked after the request has been forwarded to the ConsumerCoordinator and 
> the client has joined the group. This allows unauthorized members to impact 
> the rest of the group since the coordinator will assign partitions to them. 
> It would be better to check permission and return immediately if the client 
> is unauthorized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2573) Mirror maker system test hangs and eventually fails

2015-09-24 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907021#comment-14907021
 ] 

Ashish K Singh commented on KAFKA-2573:
---

OK, I have updated the PR to just make the test fail faster.

> Mirror maker system test hangs and eventually fails
> ---
>
> Key: KAFKA-2573
> URL: https://issues.apache.org/jira/browse/KAFKA-2573
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Due to changes made in KAFKA-2015, handling of {{--consumer.config}} has 
> changed; more details are specified in KAFKA-2467. This leads to the exception:
> {code}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
>   at kafka.utils.Pool.keys(Pool.scala:77)
>   at 
> kafka.consumer.FetchRequestAndResponseStatsRegistry$.removeConsumerFetchRequestAndResponseStats(FetchRequestAndResponseStats.scala:69)
>   at 
> kafka.metrics.KafkaMetricsGroup$.removeAllConsumerMetrics(KafkaMetricsGroup.scala:189)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:200)
>   at kafka.consumer.OldConsumer.stop(BaseConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:98)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:57)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:41)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Kafka-trunk #634

2015-09-24 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2372) Copycat distributed config storage

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907432#comment-14907432
 ] 

ASF GitHub Bot commented on KAFKA-2372:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/241

KAFKA-2372: Add Kafka-backed storage of Copycat configs.

This also adds some other needed infrastructure for distributed Copycat, most
importantly the DistributedHerder, and refactors some code for handling
Kafka-backed logs into KafkaBasedLog since this is shared between offset and
config storage.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2372-copycat-distributed-config

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/241.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #241


commit 22c1b1ea361e27e69f040445b99cb142d960a233
Author: Ewen Cheslack-Postava 
Date:   2015-09-21T01:07:20Z

KAFKA-2372: Add Kafka-backed storage of Copycat configs.

This also adds some other needed infrastructure for distributed Copycat, most
importantly the DistributedHerder, and refactors some code for handling
Kafka-backed logs into KafkaBasedLog since this is shared between offset and
config storage.




> Copycat distributed config storage
> --
>
> Key: KAFKA-2372
> URL: https://issues.apache.org/jira/browse/KAFKA-2372
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add a config storage mechanism to Copycat that works in distributed mode. 
> Copycat workers that start in distributed mode should use this implementation 
> by default.
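
To illustrate the Kafka-backed log idea the PR mentions (replaying a compacted 
topic to materialize the latest value per key, then continuing to consume), a 
minimal consumer-side sketch; the "copycat-configs" topic name and String types 
are illustrative, not Copycat's actual schema:

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConfigLogReplay {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "config-replay-demo");
        props.put("auto.offset.reset", "earliest"); // read the log from the start
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, String> configState = new HashMap<String, String>();
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("copycat-configs"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                if (record.value() == null)
                    configState.remove(record.key()); // tombstone deletes the entry
                else
                    configState.put(record.key(), record.value()); // last write wins
            }
        }
    }
}
{code}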



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2579; prevent unauthorized clients from ...

2015-09-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/240


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2372: Add Kafka-backed storage of Copyca...

2015-09-24 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/241

KAFKA-2372: Add Kafka-backed storage of Copycat configs.

This also adds some other needed infrastructure for distributed Copycat, most
importantly the DistributedHerder, and refactors some code for handling
Kafka-backed logs into KafkaBasedLog since this is shared between offset and
config storage.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2372-copycat-distributed-config

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/241.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #241


commit 22c1b1ea361e27e69f040445b99cb142d960a233
Author: Ewen Cheslack-Postava 
Date:   2015-09-21T01:07:20Z

KAFKA-2372: Add Kafka-backed storage of Copycat configs.

This also adds some other needed infrastructure for distributed Copycat, most
importantly the DistributedHerder, and refactors some code for handling
Kafka-backed logs into KafkaBasedLog since this is shared between offset and
config storage.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2579) Unauthorized clients should not be able to join groups

2015-09-24 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2579.
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 240
[https://github.com/apache/kafka/pull/240]

> Unauthorized clients should not be able to join groups 
> ---
>
> Key: KAFKA-2579
> URL: https://issues.apache.org/jira/browse/KAFKA-2579
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> The JoinGroup authorization is only checked in the response callback which is 
> invoked after the request has been forwarded to the ConsumerCoordinator and 
> the client has joined the group. This allows unauthorized members to impact 
> the rest of the group since the coordinator will assign partitions to them. 
> It would be better to check permission and return immediately if the client 
> is unauthorized.
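
The intended ordering is easy to sketch. The skeleton below is hypothetical
(the real change lives in the broker's request handling), but it shows the
point of the fix -- authorize before the coordinator ever sees the join:

{code}
// Hypothetical skeleton, not the actual broker code.
interface GroupAuthorizer {
    boolean mayRead(String principal, String groupId);
}

class JoinGroupHandler {
    private final GroupAuthorizer authorizer;

    JoinGroupHandler(GroupAuthorizer authorizer) {
        this.authorizer = authorizer;
    }

    String handleJoinGroup(String principal, String groupId) {
        if (!authorizer.mayRead(principal, groupId)) {
            // Fail fast: the coordinator never sees the request, so the
            // unauthorized client is never assigned any partitions.
            return "GROUP_AUTHORIZATION_FAILED";
        }
        return forwardToCoordinator(principal, groupId);
    }

    private String forwardToCoordinator(String principal, String groupId) {
        return "OK"; // stand-in for ConsumerCoordinator.handleJoinGroup(...)
    }
}
{code}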



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2579) Unauthorized clients should not be able to join groups

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907429#comment-14907429
 ] 

ASF GitHub Bot commented on KAFKA-2579:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/240


> Unauthorized clients should not be able to join groups 
> ---
>
> Key: KAFKA-2579
> URL: https://issues.apache.org/jira/browse/KAFKA-2579
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> The JoinGroup authorization is only checked in the response callback which is 
> invoked after the request has been forwarded to the ConsumerCoordinator and 
> the client has joined the group. This allows unauthorized members to impact 
> the rest of the group since the coordinator will assign partitions to them. 
> It would be better to check permission and return immediately if the client 
> is unauthorized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #637

2015-09-24 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2571; KafkaLog4jAppender dies while specifying acks config

[cshapi] KAFKA-2579; prevent unauthorized clients from joining groups

--
[...truncated 3225 lines...]
kafka.coordinator.ConsumerCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.ConsumerCoordinatorResponseTest > 
testJoinGroupUnknownPartitionAssignmentStrategy PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.coordinator.ConsumerCoordinatorResponseTest > 
testJoinGroupInconsistentPartitionAssignmentStrategy PASSED

kafka.coordinator.ConsumerCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.ConsumerCoordinatorResponseTest > testValidJoinGroup PASSED

kafka.coordinator.ConsumerCoordinatorResponseTest > 
testHeartbeatUnknownConsumerExistingGroup PASSED

kafka.coordinator.ConsumerCoordinatorResponseTest > testValidHeartbeat PASSED

kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.api.ConsumerTest > testUnsubscribeTopic PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.api.ConsumerTest > testListTopics PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.api.ConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeralRecursive PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral PASSED

kafka.zk.ZKEphemeralTest > testSameSession PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.network.SocketServerTest > testSSLSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.api.ConsumerTest > testPatternUnsubscription PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.api.ConsumerTest > testGroupConsumption PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.api.ConsumerTest > testPartitionsFor PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.integration.UncleanLeaderElectionTest > 

[jira] [Commented] (KAFKA-2561) Optionally support OpenSSL for SSL/TLS

2015-09-24 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906632#comment-14906632
 ] 

Jiangjie Qin commented on KAFKA-2561:
-

[~ijuma] Just curious, have we found out why the performance differs so much? 
Is it possible that we can tweak some settings of JDK SslEngine to improve the 
performance?

> Optionally support OpenSSL for SSL/TLS 
> ---
>
> Key: KAFKA-2561
> URL: https://issues.apache.org/jira/browse/KAFKA-2561
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>
> JDK's `SSLEngine` is unfortunately a bit slow (KAFKA-2431 covers this in more 
> detail). We should consider supporting OpenSSL for SSL/TLS. Initial 
> experiments on my laptop show that it performs a lot better:
> {code}
> start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, 
> nMsg.sec, config
> 2015-09-21 14:41:58:245, 2015-09-21 14:47:02:583, 28610.2295, 94.0081, 
> 3000, 98574.6111, Java 8u60/server auth JDK 
> SSLEngine/TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
> 2015-09-21 14:38:24:526, 2015-09-21 14:40:19:941, 28610.2295, 247.8900, 
> 3000, 259931.5514, Java 8u60/server auth 
> OpenSslEngine/TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
> 2015-09-21 14:49:03:062, 2015-09-21 14:50:27:764, 28610.2295, 337.7751, 
> 3000, 354182.9000, Java 8u60/plaintext
> {code}
> Extracting the throughput figures:
> * JDK SSLEngine: 94 MB/s
> * OpenSSL SSLEngine: 247 MB/s
> * Plaintext: 337 MB/s (code from trunk, so no zero-copy due to KAFKA-2517)
> In order to get these figures, I used Netty's `OpenSslEngine` by hacking 
> `SSLFactory` to use Netty's `SslContextBuilder` and made a few changes to 
> `SSLTransportLayer` in order to workaround differences in behaviour between 
> `OpenSslEngine` and JDK's SSLEngine (filed 
> https://github.com/netty/netty/issues/4235 and 
> https://github.com/netty/netty/issues/4238 upstream).
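
For anyone who wants to reproduce the experiment, the Netty side is small.
A rough sketch (file paths are placeholders, and netty-tcnative must be on
the classpath for the OPENSSL provider to be available):

{code}
import java.io.File;
import javax.net.ssl.SSLEngine;
import io.netty.buffer.ByteBufAllocator;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

// Build an OpenSSL-backed SSLEngine through Netty instead of the JDK provider.
public class OpenSslEngineSketch {
    public static SSLEngine serverEngine(File certChain, File privateKey) throws Exception {
        if (!OpenSsl.isAvailable()) {
            throw new IllegalStateException("netty-tcnative not found",
                    OpenSsl.unavailabilityCause());
        }
        SslContext ctx = SslContextBuilder.forServer(certChain, privateKey)
                .sslProvider(SslProvider.OPENSSL)
                .build();
        return ctx.newEngine(ByteBufAllocator.DEFAULT); // drop-in for SSLContext.createSSLEngine()
    }
}
{code}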



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-31 - Move to relative offsets in compressed message sets.

2015-09-24 Thread Mayuresh Gharat
+1

On Wed, Sep 23, 2015 at 10:16 PM, Guozhang Wang  wrote:

> +1
>
> On Wed, Sep 23, 2015 at 9:32 PM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
>
> > +1
> >
> > On Wed, Sep 23, 2015 at 8:03 PM, Neha Narkhede 
> wrote:
> >
> > > +1
> > >
> > > On Wed, Sep 23, 2015 at 6:21 PM, Todd Palino 
> wrote:
> > >
> > > > +1000
> > > >
> > > > !
> > > >
> > > > -Todd
> > > >
> > > > On Wednesday, September 23, 2015, Jiangjie Qin
> >  > > >
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Thanks a lot for the reviews and feedback on KIP-31. It looks all
> the
> > > > > concerns of the KIP has been addressed. I would like to start the
> > > voting
> > > > > process.
> > > > >
> > > > > The short summary for the KIP:
> > > > > We are going to use the relative offset in the message format to
> > avoid
> > > > > server side recompression.
> > > > >
> > > > > In case you haven't got a chance to check, here is the KIP link.
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-31+-+Move+to+relative+offsets+in+compressed+message+sets
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Neha
> > >
> >
>
>
>
> --
> -- Guozhang
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125
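
For reference, the mechanics being voted on are compact enough to sketch. In
the proposed format the wrapper of a compressed set keeps the absolute offset
of the last inner message (the least-upper-bound property discussed on the
DISCUSS thread), and the inner messages carry relative offsets, so the broker
can assign offsets without recompressing. A hedged illustration, assuming
relative offsets run 0..n-1; this is not the actual broker code:

{code}
// Convert the relative offsets of a compressed set to absolute offsets.
// The wrapper stores the absolute offset of the LAST inner message, so a
// fetch for offset 46 against the set {45, 46, 47, 48} still finds its
// least upper bound.
public class RelativeOffsetSketch {
    public static long[] toAbsolute(long wrapperOffset, int[] relativeOffsets) {
        // The last inner message's relative offset maps to the wrapper's
        // absolute offset, which fixes the base for the whole set.
        long base = wrapperOffset - relativeOffsets[relativeOffsets.length - 1];
        long[] absolute = new long[relativeOffsets.length];
        for (int i = 0; i < relativeOffsets.length; i++) {
            absolute[i] = base + relativeOffsets[i];
        }
        return absolute;
    }

    public static void main(String[] args) {
        // Wrapper offset 48 with four inner messages -> 45 46 47 48.
        for (long o : toAbsolute(48L, new int[] {0, 1, 2, 3})) {
            System.out.print(o + " ");
        }
    }
}
{code}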


[GitHub] kafka pull request: KAFKA-2548; kafka-merge-pr tool fails to updat...

2015-09-24 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/238

KAFKA-2548; kafka-merge-pr tool fails to update JIRA with fix version 
0.9.0.0

Simplified the logic to choose the default fix version. We just hardcode
it for `trunk` and try to compute it based on the branch name for the
rest.

Removed logic that tries to handle forked release branches as it
seems to be specific to how the Spark project handles their JIRA.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2548-merge-pr-tool-4-segment-fix-version

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/238.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #238


commit a2e6787f923fffb092d7f4c880923c501d2a9b9f
Author: Ismael Juma 
Date:   2015-09-24T16:43:14Z

Don't fail on fix versions with 4 segments

Simplified the logic to choose the default fix version. We just hardcode
it for `trunk` and try to compute it based on the branch name for the
rest.

Removed logic that tries to handle forked release branches as it
seems to be specific to how the Spark project handles their JIRA.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2548) kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906638#comment-14906638
 ] 

ASF GitHub Bot commented on KAFKA-2548:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/238

KAFKA-2548; kafka-merge-pr tool fails to update JIRA with fix version 
0.9.0.0

Simplified the logic to choose the default fix version. We just hardcode
it for `trunk` and try to compute it based on the branch name for the
rest.

Removed logic that tries to handle forked release branches as it
seems to be specific to how the Spark project handles their JIRA.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2548-merge-pr-tool-4-segment-fix-version

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/238.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #238


commit a2e6787f923fffb092d7f4c880923c501d2a9b9f
Author: Ismael Juma 
Date:   2015-09-24T16:43:14Z

Don't fail on fix versions with 4 segments

Simplified the logic to choose the default fix version. We just hardcode
it for `trunk` and try to compute it based on the branch name for the
rest.

Removed logic that tries to handle forked release branches as it
seems to be specific to how the Spark project handles their JIRA.




> kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0
> -
>
> Key: KAFKA-2548
> URL: https://issues.apache.org/jira/browse/KAFKA-2548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
>  Labels: newbie
> Fix For: 0.9.0.0
>
>
> Trying to update JIRA where the fix version is set to '0.9.0.0', I get the 
> following error:
> {code}
> Traceback (most recent call last):
>   File "kafka-merge-pr.py", line 474, in 
> main()
>   File "kafka-merge-pr.py", line 460, in main
> resolve_jira_issues(commit_title, merged_refs, jira_comment)
>   File "kafka-merge-pr.py", line 317, in resolve_jira_issues
> resolve_jira_issue(merge_branches, comment, jira_id)
>   File "kafka-merge-pr.py", line 285, in resolve_jira_issue
> (major, minor, patch) = v.split(".")
> ValueError: too many values to unpack
> {code}
> We need to handle 3 and 4 part versions (x.y.z and x.y.z.w)
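
The crash came from fixed-arity unpacking of the version string; parsing that
tolerates both three- and four-segment versions is straightforward. The tool
itself is Python, so the snippet below is only a Java rendering of the logic,
not the patch:

{code}
// Accept both x.y.z and x.y.z.w fix versions: split on '.', then use only
// the leading segments instead of unpacking a fixed number of parts.
public class FixVersionSketch {
    static boolean matchesBranch(String fixVersion, String releaseBranch) {
        String[] parts = fixVersion.split("\\."); // 3 or 4 segments, both fine
        if (parts.length < 2) {
            return false;
        }
        return releaseBranch.equals(parts[0] + "." + parts[1]);
    }

    public static void main(String[] args) {
        System.out.println(matchesBranch("0.9.0.0", "0.9")); // true
        System.out.println(matchesBranch("0.9.0", "0.9"));   // true
    }
}
{code}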



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2548) kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0

2015-09-24 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2548:
---
  Labels:   (was: newbie)
Reviewer: Guozhang Wang
  Status: Patch Available  (was: In Progress)

> kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0
> -
>
> Key: KAFKA-2548
> URL: https://issues.apache.org/jira/browse/KAFKA-2548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> Trying to update JIRA where the fix version is set to '0.9.0.0', I get the 
> following error:
> {code}
> Traceback (most recent call last):
>   File "kafka-merge-pr.py", line 474, in 
> main()
>   File "kafka-merge-pr.py", line 460, in main
> resolve_jira_issues(commit_title, merged_refs, jira_comment)
>   File "kafka-merge-pr.py", line 317, in resolve_jira_issues
> resolve_jira_issue(merge_branches, comment, jira_id)
>   File "kafka-merge-pr.py", line 285, in resolve_jira_issue
> (major, minor, patch) = v.split(".")
> ValueError: too many values to unpack
> {code}
> We need to handle 3 and 4 part versions (x.y.z and x.y.z.w)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-2548) kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0

2015-09-24 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2548 started by Ismael Juma.
--
> kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0
> -
>
> Key: KAFKA-2548
> URL: https://issues.apache.org/jira/browse/KAFKA-2548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> Trying to update JIRA where the fix version is set to '0.9.0.0', I get the 
> following error:
> {code}
> Traceback (most recent call last):
>   File "kafka-merge-pr.py", line 474, in 
> main()
>   File "kafka-merge-pr.py", line 460, in main
> resolve_jira_issues(commit_title, merged_refs, jira_comment)
>   File "kafka-merge-pr.py", line 317, in resolve_jira_issues
> resolve_jira_issue(merge_branches, comment, jira_id)
>   File "kafka-merge-pr.py", line 285, in resolve_jira_issue
> (major, minor, patch) = v.split(".")
> ValueError: too many values to unpack
> {code}
> We need to handle 3 and 4 part versions (x.y.z and x.y.z.w)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #632

2015-09-24 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2576; ConsumerPerformance hangs when SSL enabled for 
Multi-Partition Topic

--
[...truncated 932 lines...]
kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.api.ConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.admin.AdminTest > testTopicCreationWithCollision PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSSLSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.api.ConsumerTest > testPatternUnsubscription PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.api.ConsumerTest > testGroupConsumption PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.api.ConsumerTest > testPartitionsFor PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.api.ConsumerTest > testSimpleConsumption PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ConsumerTest > testPartitionPauseAndResume PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.api.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.ConsumerTest > testPartitionReassignmentCallback PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ConsumerTest > testAutoOffsetReset PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.api.ConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.api.ConsumerTest > testCommitMetadata PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED


[GitHub] kafka pull request: KAFKA-1387: Kafka getting stuck creating ephem...

2015-09-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/178


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-09-24 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1387:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Kafka getting stuck creating ephemeral node it has already created when two 
> zookeeper sessions are established in a very short period of time
> -
>
> Key: KAFKA-1387
> URL: https://issues.apache.org/jira/browse/KAFKA-1387
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Fedor Korotkiy
>Assignee: Flavio Junqueira
>Priority: Critical
>  Labels: newbie, patch, zkclient-problems
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1387.patch, kafka-1387.patch
>
>
> Kafka broker re-registers itself in zookeeper every time handleNewSession() 
> callback is invoked.
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
>  
> Now imagine the following sequence of events.
> 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
> the zkClient, but not invoked yet.
> 2) Zookeeper session reestablishes again, queueing the callback a second time.
> 3) First callback is invoked, creating the /broker/[id] ephemeral path.
> 4) Second callback is invoked and it tries to create the /broker/[id] path using 
> the createEphemeralPathExpectConflictHandleZKBug() function. But the path 
> already exists, so createEphemeralPathExpectConflictHandleZKBug() gets 
> stuck in an infinite loop.
> It seems the controller election code has the same issue.
> I'm able to reproduce this issue on the 0.8.1 branch from GitHub using the 
> following configs.
> # zookeeper
> tickTime=10
> dataDir=/tmp/zk/
> clientPort=2101
> maxClientCnxns=0
> # kafka
> broker.id=1
> log.dir=/tmp/kafka
> zookeeper.connect=localhost:2101
> zookeeper.connection.timeout.ms=100
> zookeeper.sessiontimeout.ms=100
> Just start kafka and zookeeper and then pause zookeeper several times using 
> Ctrl-Z.
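
The shape of the eventual fix is worth sketching: on NODEEXISTS, check whether
the existing ephemeral node was created by our own session, in which case the
earlier callback already did the work. This is a hedged illustration against
the raw ZooKeeper client, not the exact patch that was merged:

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class EphemeralCreateSketch {
    public static void createEphemeral(ZooKeeper zk, String path, byte[] data)
            throws KeeperException, InterruptedException {
        try {
            zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        } catch (KeeperException.NodeExistsException e) {
            Stat stat = zk.exists(path, false);
            if (stat != null && stat.getEphemeralOwner() == zk.getSessionId()) {
                // The node belongs to our own session: an earlier
                // handleNewSession() callback already created it, so there is
                // nothing to retry -- this breaks the infinite loop.
                return;
            }
            throw e; // owned by another session: a genuine conflict
        }
    }
}
{code}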



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906672#comment-14906672
 ] 

ASF GitHub Bot commented on KAFKA-1387:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/178


> Kafka getting stuck creating ephemeral node it has already created when two 
> zookeeper sessions are established in a very short period of time
> -
>
> Key: KAFKA-1387
> URL: https://issues.apache.org/jira/browse/KAFKA-1387
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Fedor Korotkiy
>Assignee: Flavio Junqueira
>Priority: Critical
>  Labels: newbie, patch, zkclient-problems
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1387.patch, kafka-1387.patch
>
>
> Kafka broker re-registers itself in zookeeper every time handleNewSession() 
> callback is invoked.
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
>  
> Now imagine the following sequence of events.
> 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
> the zkClient, but not invoked yet.
> 2) Zookeeper session reestablishes again, queueing the callback a second time.
> 3) First callback is invoked, creating the /broker/[id] ephemeral path.
> 4) Second callback is invoked and it tries to create the /broker/[id] path using 
> the createEphemeralPathExpectConflictHandleZKBug() function. But the path 
> already exists, so createEphemeralPathExpectConflictHandleZKBug() gets 
> stuck in an infinite loop.
> It seems the controller election code has the same issue.
> I'm able to reproduce this issue on the 0.8.1 branch from GitHub using the 
> following configs.
> # zookeeper
> tickTime=10
> dataDir=/tmp/zk/
> clientPort=2101
> maxClientCnxns=0
> # kafka
> broker.id=1
> log.dir=/tmp/kafka
> zookeeper.connect=localhost:2101
> zookeeper.connection.timeout.ms=100
> zookeeper.sessiontimeout.ms=100
> Just start kafka and zookeeper and then pause zookeeper several times using 
> Ctrl-Z.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2571) KafkaLog4jAppender dies while specifying "acks" config

2015-09-24 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2571:
-
Reviewer: Jun Rao

> KafkaLog4jAppender dies while specifying "acks" config
> --
>
> Key: KAFKA-2571
> URL: https://issues.apache.org/jira/browse/KAFKA-2571
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaLog4jAppender specifies "acks" config's value as int, however 
> KafkaProducer expects it as a String.
> Below is the exception that gets thrown.
> {code}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Invalid value -1 for configuration acks: Expected value to be a string, but 
> it was a java.lang.Integer
>   at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:219)
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:172)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:48)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:55)
>   at 
> org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:274)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:179)
>   at 
> org.apache.kafka.log4jappender.KafkaLog4jAppender.getKafkaProducer(KafkaLog4jAppender.java:132)
>   at 
> org.apache.kafka.log4jappender.KafkaLog4jAppender.activateOptions(KafkaLog4jAppender.java:126)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
>   at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
>   at 
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
>   at 
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>   at 
> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>   at 
> org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:440)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.(VerifiableLog4jAppender.java:141)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.createFromArgs(VerifiableLog4jAppender.java:124)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.main(VerifiableLog4jAppender.java:158)
> {code}
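
In user terms the fix is simply to pass acks as a string. A minimal producer
configuration that avoids the exception (the broker address is a placeholder):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AcksConfigSketch {
    public static KafkaProducer<String, String> newProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // "acks" is defined as a STRING config: use "0", "1", "-1" or "all".
        // Passing Integer.valueOf(-1) triggers the ConfigException above.
        props.put(ProducerConfig.ACKS_CONFIG, "-1");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }
}
{code}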



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2571) KafkaLog4jAppender dies while specifying "acks" config

2015-09-24 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2571:

   Resolution: Fixed
Fix Version/s: 0.9.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 231
[https://github.com/apache/kafka/pull/231]

> KafkaLog4jAppender dies while specifying "acks" config
> --
>
> Key: KAFKA-2571
> URL: https://issues.apache.org/jira/browse/KAFKA-2571
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> KafkaLog4jAppender specifies "acks" config's value as int, however 
> KafkaProducer expects it as a String.
> Below is the exception that gets thrown.
> {code}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Invalid value -1 for configuration acks: Expected value to be a string, but 
> it was a java.lang.Integer
>   at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:219)
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:172)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:48)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:55)
>   at 
> org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:274)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:179)
>   at 
> org.apache.kafka.log4jappender.KafkaLog4jAppender.getKafkaProducer(KafkaLog4jAppender.java:132)
>   at 
> org.apache.kafka.log4jappender.KafkaLog4jAppender.activateOptions(KafkaLog4jAppender.java:126)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
>   at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
>   at 
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
>   at 
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>   at 
> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>   at 
> org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:440)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.(VerifiableLog4jAppender.java:141)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.createFromArgs(VerifiableLog4jAppender.java:124)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.main(VerifiableLog4jAppender.java:158)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2571) KafkaLog4jAppender dies while specifying "acks" config

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907416#comment-14907416
 ] 

ASF GitHub Bot commented on KAFKA-2571:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/231


> KafkaLog4jAppender dies while specifying "acks" config
> --
>
> Key: KAFKA-2571
> URL: https://issues.apache.org/jira/browse/KAFKA-2571
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> KafkaLog4jAppender specifies "acks" config's value as int, however 
> KafkaProducer expects it as a String.
> Below is the exception that gets thrown.
> {code}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Invalid value -1 for configuration acks: Expected value to be a string, but 
> it was a java.lang.Integer
>   at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:219)
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:172)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:48)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:55)
>   at 
> org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:274)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:179)
>   at 
> org.apache.kafka.log4jappender.KafkaLog4jAppender.getKafkaProducer(KafkaLog4jAppender.java:132)
>   at 
> org.apache.kafka.log4jappender.KafkaLog4jAppender.activateOptions(KafkaLog4jAppender.java:126)
>   at 
> org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>   at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
>   at 
> org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
>   at 
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
>   at 
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>   at 
> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>   at 
> org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:440)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.(VerifiableLog4jAppender.java:141)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.createFromArgs(VerifiableLog4jAppender.java:124)
>   at 
> org.apache.kafka.clients.tools.VerifiableLog4jAppender.main(VerifiableLog4jAppender.java:158)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2575) inconsistent offset count in replication-offset-checkpoint during leader election leads to huge exceptions

2015-09-24 Thread Warren Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907444#comment-14907444
 ] 

Warren Jin commented on KAFKA-2575:
---

Just updated.

We removed the replication-offset-checkpoint and restarted the broker, and the 
issue was resolved.

On Sep 21st, one of the brokers went down due to a hardware malfunction; I 
think that may be why the replication-offset-checkpoint was corrupted.

What I would suggest is: Kafka should recover automatically if it detects that 
the replication-offset-checkpoint is corrupted.

Thanks,
Warren
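
For context, the checkpoint file the exception complains about is a small text
file: a version line, an expected-entry-count line, then one
"topic partition offset" line per partition. A hedged sketch of a reader that
surfaces the corruption described above (not the broker's actual
OffsetCheckpoint class):

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class CheckpointReadSketch {
    public static Map<String, Long> read(Path file) throws IOException {
        Map<String, Long> offsets = new HashMap<>();
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            reader.readLine();                                   // version line
            int expected = Integer.parseInt(reader.readLine().trim());
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\\s+");             // topic partition offset
                offsets.put(parts[0] + "-" + parts[1], Long.parseLong(parts[2]));
            }
            if (offsets.size() != expected) {
                // This is the mismatch behind "Expected 3918 entries but found
                // only 3904". A recovery-oriented broker could discard the
                // file and rebuild from the logs instead of failing leader
                // election.
                throw new IOException("Expected " + expected
                        + " entries but found only " + offsets.size());
            }
        }
        return offsets;
    }
}
{code}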


> inconsistent offset count in replication-offset-checkpoint during leader 
> election leads to huge exceptions
> 
>
> Key: KAFKA-2575
> URL: https://issues.apache.org/jira/browse/KAFKA-2575
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.8.2.1
>Reporter: Warren Jin
>
> We have 3 brokers, more than 100 topics in production, the default partition 
> number is 24 for each topic, the replication factor is 3.
> We noticed the following errors in recent days.
> 2015-09-22 22:25:12,529 ERROR Error on broker 1 while processing LeaderAndIsr 
> request correlationId 438501 received from controller 2 epoch 12 for 
> partition [LOGIST.DELIVERY.SUBSCRIBE,7] (state.change.logger)
> java.io.IOException: Expected 3918 entries but found only 3904
>   at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:99)
>   at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:91)
>   at 
> kafka.cluster.Partition$$anonfun$makeLeader$1$$anonfun$apply$mcZ$sp$4.apply(Partition.scala:171)
>   at 
> kafka.cluster.Partition$$anonfun$makeLeader$1$$anonfun$apply$mcZ$sp$4.apply(Partition.scala:171)
>   at scala.collection.immutable.Set$Set3.foreach(Set.scala:115)
>   at 
> kafka.cluster.Partition$$anonfun$makeLeader$1.apply$mcZ$sp(Partition.scala:171)
>   at 
> kafka.cluster.Partition$$anonfun$makeLeader$1.apply(Partition.scala:163)
>   at 
> kafka.cluster.Partition$$anonfun$makeLeader$1.apply(Partition.scala:163)
>   at kafka.utils.Utils$.inLock(Utils.scala:535)
>   at kafka.utils.Utils$.inWriteLock(Utils.scala:543)
>   at kafka.cluster.Partition.makeLeader(Partition.scala:163)
>   at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$5.apply(ReplicaManager.scala:427)
>   at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$5.apply(ReplicaManager.scala:426)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>   at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:426)
>   at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:378)
>   at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:120)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:63)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>   at java.lang.Thread.run(Thread.java:745)
> It occurs during the LOGIST.DELIVERY.SUBSCRIBE partition leader election; 
> it then repeatedly prints out the error message:
> 2015-09-23 10:20:03 525 WARN [Replica Manager on Broker 1]: Fetch request 
> with correlation id 14943530 from client ReplicaFetcherThread-2-1 on 
> partition [LOGIST.DELIVERY.SUBSCRIBE,22] failed due to Leader not local for 
> partition [LOGIST.DELIVERY.SUBSCRIBE,22] on broker 1 
> (kafka.server.ReplicaManager)
> 2015-09-23 10:20:03 525 WARN [Replica Manager on Broker 1]: Fetch request 
> with correlation id 15022337 from client ReplicaFetcherThread-1-1 on 
> partition [LOGIST.DELIVERY.SUBSCRIBE,1] failed due to Leader not local for 
> partition [LOGIST.DELIVERY.SUBSCRIBE,1] on broker 1 
> (kafka.server.ReplicaManager)
> 2015-09-23 10:20:03 525 WARN [Replica Manager on Broker 1]: Fetch request 
> with correlation id 15078431 from client ReplicaFetcherThread-0-1 on 
> partition [LOGIST.DELIVERY.SUBSCRIBE,4] failed due to Leader not local for 
> partition [LOGIST.DELIVERY.SUBSCRIBE,4] on broker 1 
> (kafka.server.ReplicaManager)
> 2015-09-23 10:20:03 525 WARN [Replica Manager on Broker 1]: Fetch request 
> with correlation id 13477660 from client ReplicaFetcherThread-2-1 on 
> partition [LOGIST.DELIVERY.SUBSCRIBE,10] failed due to Leader not local for 
> partition [LOGIST.DELIVERY.SUBSCRIBE,10] on broker 1 
> (kafka.server.ReplicaManager)
> 2015-09-23 10:20:03 525 WARN [Replica Manager on Broker 1]: Fetch request 
> with correlation id 

[jira] [Commented] (KAFKA-106) Include rolledup segment size stats via jmx

2015-09-24 Thread UTKARSH BHATNAGAR (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907589#comment-14907589
 ] 

UTKARSH BHATNAGAR commented on KAFKA-106:
-

[~junrao] - I would like to submit a PR for this. Can you please assign this to 
me?

Also, I have written KafkaWriter for JMXTrans:
https://github.com/jmxtrans/jmxtrans/tree/master/jmxtrans-output/jmxtrans-output-kafka

> Include rolledup segment size stats via jmx
> ---
>
> Key: KAFKA-106
> URL: https://issues.apache.org/jira/browse/KAFKA-106
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Chris Burroughs
>Priority: Minor
>
> We already have size for each topic-partition pair.  From the user list it 
> looks like it would be helpful to also include the total size for a topic, and 
> the size across all topics.
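
A rollup of the existing per-partition gauges is a small amount of code. A
hedged sketch of the aggregation (the "topic-partition" key format here is
illustrative):

{code}
import java.util.HashMap;
import java.util.Map;

public class SegmentSizeRollupSketch {
    // Input: per-"topic-partition" log sizes in bytes, as exposed today.
    // Output: total size per topic; summing the result gives the size
    // across all topics.
    public static Map<String, Long> perTopic(Map<String, Long> partitionSizes) {
        Map<String, Long> totals = new HashMap<>();
        for (Map.Entry<String, Long> e : partitionSizes.entrySet()) {
            String key = e.getKey();
            String topic = key.substring(0, key.lastIndexOf('-')); // strip "-<partition>"
            totals.merge(topic, e.getValue(), Long::sum);
        }
        return totals;
    }
}
{code}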



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: add unit test for OffsetResetStrategyTest

2015-09-24 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/239

add unit test for OffsetResetStrategyTest



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-2390-followup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/239.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #239


commit 39c4cfb89a04f975d1d23cd5d880c9dc37c4f9eb
Author: Dong Lin 
Date:   2015-09-24T17:46:04Z

add unit test for OffsetResetStrategyTest




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-31 - Move to relative offsets in compressed message sets.

2015-09-24 Thread Joel Koshy
+1 on everything but the upgrade plan, which is a bit scary - will
comment on the discuss thread.

On Thu, Sep 24, 2015 at 9:51 AM, Mayuresh Gharat
 wrote:
> +1
>
> On Wed, Sep 23, 2015 at 10:16 PM, Guozhang Wang  wrote:
>
>> +1
>>
>> On Wed, Sep 23, 2015 at 9:32 PM, Aditya Auradkar <
>> aaurad...@linkedin.com.invalid> wrote:
>>
>> > +1
>> >
>> > On Wed, Sep 23, 2015 at 8:03 PM, Neha Narkhede 
>> wrote:
>> >
>> > > +1
>> > >
>> > > On Wed, Sep 23, 2015 at 6:21 PM, Todd Palino 
>> wrote:
>> > >
>> > > > +1000
>> > > >
>> > > > !
>> > > >
>> > > > -Todd
>> > > >
>> > > > On Wednesday, September 23, 2015, Jiangjie Qin
>> > > > > >
>> > > > wrote:
>> > > >
>> > > > > Hi,
>> > > > >
>> > > > > Thanks a lot for the reviews and feedback on KIP-31. It looks all
>> the
>> > > > > concerns of the KIP has been addressed. I would like to start the
>> > > voting
>> > > > > process.
>> > > > >
>> > > > > The short summary for the KIP:
>> > > > > We are going to use the relative offset in the message format to
>> > avoid
>> > > > > server side recompression.
>> > > > >
>> > > > > In case you haven't got a chance to check, here is the KIP link.
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-31+-+Move+to+relative+offsets+in+compressed+message+sets
>> > > > >
>> > > > > Thanks,
>> > > > >
>> > > > > Jiangjie (Becket) Qin
>> > > > >
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Thanks,
>> > > Neha
>> > >
>> >
>>
>>
>>
>> --
>> -- Guozhang
>>
>
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125


Re: [DISCUSS] KIP-31 - Message format change proposal

2015-09-24 Thread Joel Koshy
The upgrade plan works, but the potentially long interim phase of
skipping zero-copy for down-conversion could be problematic especially
for large deployments with large consumer fan-out. It is not only
going to be memory overhead but CPU as well - since you need to
decompress, write absolute offsets, then recompress for every v1
fetch. i.e., it may be safer (but obviously more tedious) to have a
multi-step upgrade process, e.g.:

1 - Upgrade brokers, but disable the feature. i.e., either reject
producer requests v2 or down-convert to old message format (with
absolute offsets)
2 - Upgrade clients, but they should only use v1 requests
3 - Switch (all or most) consumers to use v2 fetch format (which will
use zero-copy).
4 - Turn on the feature on the brokers to allow producer requests v2
5 - Switch producers to use v2 produce format

(You may want a v1 fetch rate metric and decide to proceed to step 4
only when that comes down to a trickle)

I'm not sure if the prolonged upgrade process is viable in every
scenario. I think it should work at LinkedIn for e.g., but may not for
other environments.

Joel


On Tue, Sep 22, 2015 at 12:55 AM, Jiangjie Qin
 wrote:
> Thanks for the explanation, Jay.
> Agreed. We have to keep the offset to be the offset of last inner message.
>
> Jiangjie (Becket) Qin
>
> On Mon, Sep 21, 2015 at 6:21 PM, Jay Kreps  wrote:
>
>> For (3) I don't think we can change the offset in the outer message from
>> what it is today as it is relied upon in the search done in the log layer.
>> The reason it is the offset of the last message rather than the first is to
>> make the offset a least upper bound (i.e. the smallest offset >=
>> fetch_offset). This needs to work the same for both gaps due to compacted
>> topics and gaps due to compressed messages.
>>
>> So imagine you had a compressed set with offsets {45, 46, 47, 48} if you
>> assigned this compressed set the offset 45 a fetch for 46 would actually
>> skip ahead to 49 (the least upper bound).
>>
>> -Jay
>>
>> On Mon, Sep 21, 2015 at 5:17 PM, Jun Rao  wrote:
>>
>> > Jiangjie,
>> >
>> > Thanks for the writeup. A few comments below.
>> >
>> > 1. We will need to be a bit careful with fetch requests from the
>> followers.
>> > Basically, as we are doing a rolling upgrade of the brokers, the follower
>> > can't start issuing V2 of the fetch request until the rest of the brokers
>> > are ready to process it. So, we probably need to make use of
>> > inter.broker.protocol.version to do the rolling upgrade. In step 1, we
>> set
>> > inter.broker.protocol.version to 0.9 and do a round of rolling upgrade of
>> > the brokers. At this point, all brokers are capable of processing V2 of
>> > fetch requests, but no broker is using it yet. In step 2, we
>> > set inter.broker.protocol.version to 0.10 and do another round of rolling
>> > restart of the brokers. In this step, the upgraded brokers will start
>> > issuing V2 of the fetch request.
>> >
>> > 2. If we do #1, I am not sure if there is still a need for
>> > message.format.version since the broker can start writing messages in the
>> > new format after inter.broker.protocol.version is set to 0.10.
>> >
>> > 3. It wasn't clear from the wiki whether the base offset in the shallow
>> > message is the offset of the first or the last inner message. It's better
>> > to use the offset of the last inner message. This way, the followers
>> don't
>> > have to decompress messages to figure out the next fetch offset.
>> >
>> > 4. I am not sure that I understand the following sentence in the wiki. It
>> > seems that the relative offsets in a compressed message don't have to be
>> > consecutive. If so, why do we need to update the relative offsets in the
>> > inner messages?
>> > "When the log cleaner compacts log segments, it needs to update the inner
>> > message's relative offset values."
>> >
>> > Thanks,
>> >
>> > Jun
>> >
>> > On Thu, Sep 17, 2015 at 12:54 PM, Jiangjie Qin > >
>> > wrote:
>> >
>> > > Hi folks,
>> > >
>> > > Thanks a lot for the feedback on KIP-31 - move to use relative offset.
>> > (Not
>> > > including timestamp and index discussion).
>> > >
>> > > I updated the migration plan section as we discussed on KIP hangout. I
>> > > think it is the only concern raised so far. Please let me know if there
>> > are
>> > > further comments about the KIP.
>> > >
>> > > Thanks,
>> > >
>> > > Jiangjie (Becket) Qin
>> > >
>> > > On Mon, Sep 14, 2015 at 5:13 PM, Jiangjie Qin 
>> wrote:
>> > >
>> > > > I just updated the KIP-33 to explain the indexing on CreateTime and
>> > > > LogAppendTime respectively. I also used some use case to compare the
>> > two
>> > > > solutions.
>> > > > Although this is for KIP-33, but it does give a some insights on
>> > whether
>> > > > it makes sense to have a per message LogAppendTime.
>> > > >
>> > > >
>> > >
>> >
>> 

[GitHub] kafka pull request: KAFKA-2373: Add Kafka-backed offset storage fo...

2015-09-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/202


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2373) Copycat distributed offset storage

2015-09-24 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2373:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 202
[https://github.com/apache/kafka/pull/202]

> Copycat distributed offset storage
> --
>
> Key: KAFKA-2373
> URL: https://issues.apache.org/jira/browse/KAFKA-2373
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add offset storage for Copycat that works in distributed mode, which likely 
> means storing the data in a Kafka topic. Copycat workers will use this by 
> default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2373) Copycat distributed offset storage

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907387#comment-14907387
 ] 

ASF GitHub Bot commented on KAFKA-2373:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/202


> Copycat distributed offset storage
> --
>
> Key: KAFKA-2373
> URL: https://issues.apache.org/jira/browse/KAFKA-2373
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add offset storage for Copycat that works in distributed mode, which likely 
> means storing the data in a Kafka topic. Copycat workers will use this by 
> default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-31 - Message format change proposal

2015-09-24 Thread Jiangjie Qin
Hi Joel,

That is a valid concern. And that is actually why we had the
message.format.version before.

My original thinking was:
1. Upgrade the brokers to support both V1 and V2 of the consumer/producer
requests.
2. Configure the brokers to store V1 on disk (message.format.version = 1).
3. Upgrade the consumers to support both V1 and V2 of the consumer request.
4. Meanwhile, some producers might also be upgraded to use producer request
V2.
5. At this point, for producer request V2, the broker will do down-conversion.
Regardless of whether consumers are upgraded or not, the broker will always
use zero-copy transfer, because supposedly both old and upgraded consumers
should be able to understand that format.
6. After most of the consumers are upgraded, we set message.format.version
= 2 and only do down-conversion for old consumers.

This way we don't need to reject producer request V2, and we only do version
conversion for the minority of the consumers. However, I have a few concerns
about this approach; I am not sure whether they actually matter.

A. (5) is not true for now. Today the clients only use the highest version,
i.e. a producer/consumer wouldn't parse a lower version of a response even
though the code for it exists. I think, supposedly, the consumer should
stick to one version and the broker should do the conversion.
B. Let's say (A) is not a concern and we make all the clients support all
the versions they know. At step (6), there will be a transitional period in
which users will see messages in both the new and old formats. For KIP-31
alone this might be OK because we are not adding anything to the message.
But if the message gains new fields (e.g. KIP-32), people will get those
fields from some messages but not from others. Would that be a problem?

If (A) and (B) are not a problem, is the above procedure able to address
your concern?

Thanks,

Jiangjie (Becket) Qin

On Thu, Sep 24, 2015 at 6:32 PM, Joel Koshy  wrote:

> The upgrade plan works, but the potentially long interim phase of
> skipping zero-copy for down-conversion could be problematic especially
> for large deployments with large consumer fan-out. It is not only
> going to be memory overhead but CPU as well - since you need to
> decompress, write absolute offsets, then recompress for every v1
> fetch. i.e., it may be safer (but obviously more tedious) to have a
> multi-step upgrade process, e.g.:
>
> 1 - Upgrade brokers, but disable the feature. i.e., either reject
> producer requests v2 or down-convert to old message format (with
> absolute offsets)
> 2 - Upgrade clients, but they should only use v1 requests
> 3 - Switch (all or most) consumers to use v2 fetch format (which will
> use zero-copy).
> 4 - Turn on the feature on the brokers to allow producer requests v2
> 5 - Switch producers to use v2 produce format
>
> (You may want a v1 fetch rate metric and decide to proceed to step 4
> only when that comes down to a trickle)
>
> I'm not sure if the prolonged upgrade process is viable in every
> scenario. I think it should work at LinkedIn for e.g., but may not for
> other environments.
>
> Joel
>
>
> On Tue, Sep 22, 2015 at 12:55 AM, Jiangjie Qin
>  wrote:
> > Thanks for the explanation, Jay.
> > Agreed. We have to keep the offset to be the offset of last inner
> message.
> >
> > Jiangjie (Becket) Qin
> >
> > On Mon, Sep 21, 2015 at 6:21 PM, Jay Kreps  wrote:
> >
> >> For (3) I don't think we can change the offset in the outer message from
> >> what it is today as it is relied upon in the search done in the log
> layer.
> >> The reason it is the offset of the last message rather than the first
> is to
> >> make the offset a least upper bound (i.e. the smallest offset >=
> >> fetch_offset). This needs to work the same for both gaps due to
> compacted
> >> topics and gaps due to compressed messages.
> >>
> >> So imagine you had a compressed set with offsets {45, 46, 47, 48} if you
> >> assigned this compressed set the offset 45 a fetch for 46 would actually
> >> skip ahead to 49 (the least upper bound).
> >>
> >> -Jay
> >>
> >> On Mon, Sep 21, 2015 at 5:17 PM, Jun Rao  wrote:
> >>
> >> > Jiangjie,
> >> >
> >> > Thanks for the writeup. A few comments below.
> >> >
> >> > 1. We will need to be a bit careful with fetch requests from the
> >> > followers. Basically, as we are doing a rolling upgrade of the
> >> > brokers, the follower can't start issuing V2 of the fetch request
> >> > until the rest of the brokers are ready to process it. So, we
> >> > probably need to make use of inter.broker.protocol.version to do the
> >> > rolling upgrade. In step 1, we set inter.broker.protocol.version to
> >> > 0.9 and do a round of rolling upgrade of the brokers. At this point,
> >> > all brokers are capable of processing V2 of fetch requests, but no
> >> > broker is using it yet. In step 2, we set
> >> > inter.broker.protocol.version to 0.10 and do another round of
> >> > rolling restart of the brokers.
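
(A toy sketch of the gating Jun describes; the helper below is
illustrative, not Kafka's actual ApiVersion code. The idea is that
followers derive the fetch request version from the configured
inter.broker.protocol.version, so V2 is only sent after the second
rolling bounce:)

    import java.util.Properties;

    public class FetchVersionSelector {
        // Followers only start sending V2 once the configured protocol
        // version says every broker in the cluster can handle it.
        static short fetchVersionFor(String protocolVersion) {
            return protocolVersion.startsWith("0.10")
                    ? (short) 2 : (short) 1;
        }

        public static void main(String[] args) {
            Properties config = new Properties();

            // Step 1: the code understands V2, but the config still says
            // 0.9, so followers keep issuing V1 fetch requests.
            config.setProperty("inter.broker.protocol.version", "0.9.0");
            System.out.println("fetch v" + fetchVersionFor(
                    config.getProperty("inter.broker.protocol.version")));

            // Step 2: bump the config and bounce again; now every broker
            // can parse V2, so followers may switch to it.
            config.setProperty("inter.broker.protocol.version", "0.10.0");
            System.out.println("fetch v" + fetchVersionFor(
                    config.getProperty("inter.broker.protocol.version")));
        }
    }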

[GitHub] kafka pull request: KAFKA-2579; prevent unauthorized clients from ...

2015-09-24 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/240

KAFKA-2579; prevent unauthorized clients from joining groups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2579

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/240.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #240


commit c2700d06c0e73a0259c254bd1b776c271c86c79e
Author: Jason Gustafson 
Date:   2015-09-25T00:33:26Z

KAFKA-2579; prevent unauthorized clients from joining groups






[jira] [Commented] (KAFKA-2579) Unauthorized clients should not be able to join groups

2015-09-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907308#comment-14907308
 ] 

ASF GitHub Bot commented on KAFKA-2579:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/240

KAFKA-2579; prevent unauthorized clients from joining groups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2579

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/240.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #240


commit c2700d06c0e73a0259c254bd1b776c271c86c79e
Author: Jason Gustafson 
Date:   2015-09-25T00:33:26Z

KAFKA-2579; prevent unauthorized clients from joining groups




> Unauthorized clients should not be able to join groups 
> ---
>
> Key: KAFKA-2579
> URL: https://issues.apache.org/jira/browse/KAFKA-2579
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The JoinGroup authorization is only checked in the response callback, which 
> is invoked after the request has been forwarded to the ConsumerCoordinator 
> and the client has joined the group. This allows unauthorized members to 
> impact the rest of the group, since the coordinator will assign partitions 
> to them. It would be better to check permissions and return immediately if 
> the client is unauthorized.
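
(A minimal sketch of the suggested fix; the names and error code below are
illustrative, not Kafka's actual handler code. The point is to authorize up
front and return immediately, instead of checking in the response callback
after the member has already joined:)

    public class JoinGroupHandlerSketch {
        interface Authorizer {
            boolean authorize(String principal, String group);
        }

        static final short NONE = 0;
        static final short GROUP_AUTHORIZATION_FAILED = 30; // illustrative

        static short handleJoinGroup(Authorizer authorizer,
                                     String principal, String group) {
            if (!authorizer.authorize(principal, group)) {
                // Fail fast: the request never reaches the coordinator, so
                // no partitions are ever assigned to unauthorized members.
                return GROUP_AUTHORIZATION_FAILED;
            }
            // ... forward to the ConsumerCoordinator as before ...
            return NONE;
        }

        public static void main(String[] args) {
            Authorizer denyAll = (principal, group) -> false;
            System.out.println(
                    handleJoinGroup(denyAll, "client-1", "my-group")); // 30
        }
    }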





[jira] [Commented] (KAFKA-2373) Copycat distributed offset storage

2015-09-24 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907422#comment-14907422
 ] 

Gwen Shapira commented on KAFKA-2373:
-

Thanks for the contribution [~ewencp] and for the review [~wushujames]!

> Copycat distributed offset storage
> --
>
> Key: KAFKA-2373
> URL: https://issues.apache.org/jira/browse/KAFKA-2373
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add offset storage for Copycat that works in distributed mode, which likely 
> means storing the data in a Kafka topic. Copycat workers will use this by 
> default.
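
(A hedged sketch of the likely mechanism; the topic name, key, and value
format below are assumptions, not the implemented design. Keying each
record by its source partition lets a compacted topic retain only the
latest committed offset per partition:)

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OffsetTopicSketch {
        public static void main(String[] args) {
            // Broker address is an example.
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer =
                    new KafkaProducer<>(props);
            // Key = connector + source partition; value = its offset.
            producer.send(new ProducerRecord<>("copycat-offsets",
                    "file-source-0", "{\"position\": 4096}"));
            producer.close(); // flushes the pending send
        }
    }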





[jira] [Updated] (KAFKA-2372) Copycat distributed config storage

2015-09-24 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2372:
-
Reviewer: Gwen Shapira
  Status: Patch Available  (was: Open)

> Copycat distributed config storage
> --
>
> Key: KAFKA-2372
> URL: https://issues.apache.org/jira/browse/KAFKA-2372
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add a config storage mechanism to Copycat that works in distributed mode. 
> Copycat workers that start in distributed mode should use this implementation 
> by default.


