[jira] [Commented] (KAFKA-2476) Define logical types for Copycat data API

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946244#comment-14946244
 ] 

ASF GitHub Bot commented on KAFKA-2476:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/281


> Define logical types for Copycat data API
> -
>
> Key: KAFKA-2476
> URL: https://issues.apache.org/jira/browse/KAFKA-2476
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> We need some common types like datetime and decimal. This boils down to 
> defining the schemas for these types, along with documenting their semantics.
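The decimal case above can be sketched as a round-trip between a BigDecimal and a bytes payload whose scale lives in the schema. This is a minimal illustration of the idea, not the actual Copycat API; the class and constant names here are hypothetical.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Sketch: a decimal logical type encoded as the unscaled value's
// two's-complement bytes, with the scale carried as schema metadata.
// Names are illustrative, not the real Copycat API.
public class DecimalLogicalType {
    // Hypothetical logical-type name attached to the bytes schema.
    public static final String LOGICAL_NAME = "org.apache.kafka.copycat.data.Decimal";

    // Serialize: store only the unscaled value; the scale is schema metadata.
    public static byte[] fromDecimal(BigDecimal value, int scale) {
        if (value.scale() != scale)
            throw new IllegalArgumentException("Decimal scale does not match schema");
        return value.unscaledValue().toByteArray();
    }

    // Deserialize: rebuild the BigDecimal from bytes plus the schema's scale.
    public static BigDecimal toDecimal(byte[] encoded, int scale) {
        return new BigDecimal(new BigInteger(encoded), scale);
    }

    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("123.45"); // scale 2
        byte[] wire = fromDecimal(price, 2);
        System.out.println(toDecimal(wire, 2)); // round-trips exactly
    }
}
```

The key semantic point is that the payload alone is not self-describing: both sides must agree on the scale via the schema, which is exactly why the semantics need documenting.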



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2476) Define logical types for Copycat data API

2015-10-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2476:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 281
[https://github.com/apache/kafka/pull/281]

> Define logical types for Copycat data API
> -
>
> Key: KAFKA-2476
> URL: https://issues.apache.org/jira/browse/KAFKA-2476
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> We need some common types like datetime and decimal. This boils down to 
> defining the schemas for these types, along with documenting their semantics.





Build failed in Jenkins: kafka-trunk-jdk8 #6

2015-10-06 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2476: Add Decimal, Date, and Timestamp logical types.

--
[...truncated 859 lines...]
:81:
 warning: no @param for valueSerializer
public MockProducer(boolean autoComplete, Serializer<K> keySerializer, 
Serializer<V> valueSerializer) {
   ^
:39:
 warning: no @return
public int partition(String topic, Object key, byte[] keyBytes, Object 
value, byte[] valueBytes, Cluster cluster);
   ^
:71:
 warning: no @return
public String topic() {
  ^
:78:
 warning: no @return
public K key() {
 ^
:92:
 warning: no @return
public Integer partition() {
   ^
:44:
 warning: no @return
public long offset() {
^
:51:
 warning: no @return
public String topic() {
  ^
:58:
 warning: no @return
public int partition() {
   ^

76 warnings
:kafka-trunk-jdk8:log4j-appender:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning

:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes
:kafka-trunk-jdk8:log4j-appender:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:259:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:277:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:278:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:380:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc 

[GitHub] kafka pull request: KAFKA-2476: Add Decimal, Date, and Timestamp l...

2015-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/281


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2015-10-06 Thread Joel Koshy
Thanks for the write-up and discussion. This would have been super
useful in our last round of deployments at LinkedIn for which we ended
up having to hack around a number of incompatibilities. I could list
all of the compatibility issues that have hit us, but some are
irrelevant to this specific KIP (e.g., ZooKeeper registration
versions). So I should perhaps just list two that I think are
relevant:

First, is the way that our metrics collection works. We have a special
metrics producer on every service that emits to a separate metrics
cluster. Kafka brokers also use this producer to emit to the
(separate) metrics cluster. So when we upgrade our test clusters, the
metrics producer in those clusters ends up sending the latest produce 
request version to the yet-to-be-upgraded metrics cluster. This caused 
an issue for us in the last round of deployments which bumped up the
protocol version for the quota-related throttle-time response field.
We got around that by just setting the metrics producer requiredAcks
to zero (since the error occurs on parsing the response - and the old
broker fortunately did not check the request version).
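The acks=0 workaround described above can be shown as a producer configuration sketch: with acks set to zero the producer never parses a produce response, so a response-format mismatch on an older broker cannot surface as a client error. The property names follow the standard producer config; the bootstrap address is illustrative.

```java
import java.util.Properties;

// Sketch of the fire-and-forget workaround: no response is awaited or parsed,
// so a newer response schema on an older broker cannot break the client.
public class FireAndForgetConfig {
    public static Properties metricsProducerProps(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("acks", "0"); // fire-and-forget: no response to mis-parse
        return props;
    }

    public static void main(String[] args) {
        System.out.println(metricsProducerProps("metrics-cluster:9092").getProperty("acks"));
    }
}
```

Note the trade-off Joel implies: this sidesteps the incompatibility only because the old broker happened not to validate the request version.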

Second, the inter-broker protocol versioning scheme works fine across
official Apache releases but we picked up intermediate versions that
contained some request version bumps, and then follow-up versions that
picked up some more request bumps. For people deploying off trunk,
protocol version lookup would help.

General comments on the discussion and KIP:

I like Grant’s suggestion on using this to avoid the explicit
inter-broker-protocol-version - this will not only help address the
second compatibility issue above, but I’m all for anything that
eliminates an hour of config deployment (our deployments can take that
long!)

+1 on explicit response fields vs. key-value pairs - I don’t see this
reflected on the wiki though.

Aggregate protocol version vs. specific request version: the proposal
associates a single increasing aggregate version (bumped for each request
version change). It may be useful to allow look-up of the supported version (or
version range) for each request type. The BrokerMetadataResponse could
alternately return a vector of supported version ranges for each
request type.
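The per-request-type version ranges suggested above reduce to a simple negotiation rule: intersect the two [min, max] ranges and pick the highest common version. The sketch below is an illustration of that rule, not an actual Kafka protocol structure; the map layout and method names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: choosing the newest request version both client and broker support,
// given per-request-type [min, max] ranges as proposed in the discussion.
public class VersionNegotiation {
    public static int negotiate(int[] clientRange, int[] brokerRange) {
        int lo = Math.max(clientRange[0], brokerRange[0]);
        int hi = Math.min(clientRange[1], brokerRange[1]);
        if (lo > hi)
            return -1; // no common version: this request type cannot be used
        return hi;     // prefer the newest version both sides understand
    }

    public static void main(String[] args) {
        // apiKey -> {minVersion, maxVersion}, as a BrokerMetadataResponse might carry
        Map<Integer, int[]> broker = new HashMap<>();
        broker.put(0, new int[]{0, 1});  // e.g. ProduceRequest supports v0..v1
        int[] client = {1, 2};           // client speaks v1..v2
        System.out.println(negotiate(client, broker.get(0))); // common version: 1
    }
}
```

The same intersection also covers Grant's inter-broker case: two brokers on different builds can derive a common protocol version without an explicit config.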

Error response for unrecognized request versions: One option raised in
the discussion was to always include the highest supported version of
that request type in the response, but it may be worthwhile avoiding
that (since it is irrelevant most of the time) and fold that into the
BrokerMetadataRequest instead.

Max-message size/compression-codec: I actually prefer having this only
in TopicMetadataResponse and leave it out of the
BrokerMetadataRequest/Response (even for the defaults) since these are
really topic-specific fields. Rack-info on the other hand should
probably be there (at some point) in the BrokerMetadataResponse, and
this should perhaps be just a raw string that would need some
pluggable (deployment-specific) parsing.

Thanks,

Joel


On Wed, Sep 30, 2015 at 3:18 PM, Magnus Edenhill  wrote:
> Everyone, thanks for your comments and input this far, here
> follows an update of the proposal based on the discussions.
>
>
>  BrokerMetadataRequest => [NodeId]
>NodeId => int32   // Request Metadata for these brokers only.
>  // Empty array: retrieve for all brokers.
>  // Use -1 for current broker only.
>
>  BrokerMetadataResponse => [NodeId Host Port ProtocolVersionMin
> ProtocolVersionMax [Key Value]]
>   NodeId => int32  // Broker NodeId
>   Host => string   // Broker Host
>   Port => int32// Broker Port
>   ProtocolVersionMin => int32  // Broker's minimum supported protocol
> version
>   ProtocolVersionMax => int32  // Broker's maximum supported protocol
> version
>   Key => string// Tag name
>   Value => string  // Tag value
>
>
> Builtin tags:
>  "broker.id"  = "9"
>  "broker.version" = "0.9.0.0-SNAPSHOT-d12ca4f"
>  "broker.version.int" = "0x0009"
>  "compression.codecs" = "gzip,snappy,lz4"
>  "message.max.bytes"  = "100"
>  "message.formats"= "v1,v2"  // KIP-31
>  "endpoints"  = "plaintext://host:9092,ssl://host:9192"
>
> These are all documented, including their value format and how to parse it.
>
> The "broker.id" has multiple purposes:
>  * allows upgrading the bootstrap broker connection to a proper one since
> the
>broker_id is initially not known, but would be with this.
>  * verifying that the broker connected to is actually the broker id that
> was learnt
>through TopicMetadata.
>
>
> The BrokerMetadata may be used in broker-broker communication during
> upgrades
> to decide on a common protocol version between brokers with different
> versions.
>
>
>
> User-provided tags (server.properties), examples:
>  "aws.zone"   = "eu-central-1"
>  "rack"   = "r8a9"
>  "cluster"= "kafka3"
>
> User 

[GitHub] kafka-site pull request: Add Kafka 0.8.2.2 release to downloads pa...

2015-10-06 Thread omkreddy
Github user omkreddy commented on the pull request:

https://github.com/apache/kafka-site/pull/2#issuecomment-145908088
  
LGTM




[jira] [Commented] (KAFKA-2527) System Test for Quotas in Ducktape

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946226#comment-14946226
 ] 

ASF GitHub Bot commented on KAFKA-2527:
---

GitHub user lindong28 reopened a pull request:

https://github.com/apache/kafka/pull/275

KAFKA-2527; System Test for Quotas in Ducktape

@granders Can you take a look at this quota system test?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-2527

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/275.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #275


commit cf1a116e79df10e1425a41efa2a25ba013fb7e19
Author: Dong Lin 
Date:   2015-10-05T07:07:01Z

KAFKA-2527; System Test for Quotas in Ducktape

commit 345b99705fe8c63798b0e7fe9e941bd22e4e1ad2
Author: Dong Lin 
Date:   2015-10-05T21:35:56Z

adjust quota configuration

commit 28bd200713b2a34178cbae093c5ef85ef2370078
Author: Dong Lin 
Date:   2015-10-06T02:00:12Z

JmxMixin will subclass object

commit 0d07fc73fb2776739d6a8f1de0e175afa86f7825
Author: Dong Lin 
Date:   2015-10-06T04:44:30Z

support jmx query with arbitrary object name and attributes

commit c0fd7682212709ab6612afd8e27d1d531616a863
Author: Dong Lin 
Date:   2015-10-06T16:17:43Z

jmx_object_name is required to use jmx tool

commit f7cfad50f47f2c367bf002a227185a42faf7486e
Author: Dong Lin 
Date:   2015-10-06T17:07:30Z

adjust quota test configuration




> System Test for Quotas in Ducktape
> --
>
> Key: KAFKA-2527
> URL: https://issues.apache.org/jira/browse/KAFKA-2527
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
>






[GitHub] kafka pull request: KAFKA-2527; System Test for Quotas in Ducktape

2015-10-06 Thread lindong28
GitHub user lindong28 reopened a pull request:

https://github.com/apache/kafka/pull/275

KAFKA-2527; System Test for Quotas in Ducktape

@granders Can you take a look at this quota system test?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-2527

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/275.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #275


commit cf1a116e79df10e1425a41efa2a25ba013fb7e19
Author: Dong Lin 
Date:   2015-10-05T07:07:01Z

KAFKA-2527; System Test for Quotas in Ducktape

commit 345b99705fe8c63798b0e7fe9e941bd22e4e1ad2
Author: Dong Lin 
Date:   2015-10-05T21:35:56Z

adjust quota configuration

commit 28bd200713b2a34178cbae093c5ef85ef2370078
Author: Dong Lin 
Date:   2015-10-06T02:00:12Z

JmxMixin will subclass object

commit 0d07fc73fb2776739d6a8f1de0e175afa86f7825
Author: Dong Lin 
Date:   2015-10-06T04:44:30Z

support jmx query with arbitrary object name and attributes

commit c0fd7682212709ab6612afd8e27d1d531616a863
Author: Dong Lin 
Date:   2015-10-06T16:17:43Z

jmx_object_name is required to use jmx tool

commit f7cfad50f47f2c367bf002a227185a42faf7486e
Author: Dong Lin 
Date:   2015-10-06T17:07:30Z

adjust quota test configuration






[GitHub] kafka pull request: KAFKA-2527; System Test for Quotas in Ducktape

2015-10-06 Thread lindong28
Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/275




[jira] [Commented] (KAFKA-2527) System Test for Quotas in Ducktape

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946225#comment-14946225
 ] 

ASF GitHub Bot commented on KAFKA-2527:
---

Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/275


> System Test for Quotas in Ducktape
> --
>
> Key: KAFKA-2527
> URL: https://issues.apache.org/jira/browse/KAFKA-2527
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
>






[jira] [Created] (KAFKA-2615) Poll() method is broken wrt time

2015-10-06 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-2615:
---

 Summary: Poll() method is broken wrt time
 Key: KAFKA-2615
 URL: https://issues.apache.org/jira/browse/KAFKA-2615
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer, producer 
Affects Versions: 0.8.2.1
Reporter: Eno Thereska
Assignee: Eno Thereska


Initially reported by [~ewencp] and discussed with [~hachikuji]. In 
NetworkClient.java, the poll() method receives as input a "now" parameter, does 
a whole bunch of work (e.g., selector.poll()) and then keeps using "now" in all 
the subsequent method calls. 

Passing Time everywhere instead of "now" is a potential fix, but might be 
expensive since it's a new system call.





[jira] [Created] (KAFKA-2616) Improve Kafka client exceptions

2015-10-06 Thread Hurshal Patel (JIRA)
Hurshal Patel created KAFKA-2616:


 Summary: Improve Kafka client exceptions
 Key: KAFKA-2616
 URL: https://issues.apache.org/jira/browse/KAFKA-2616
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.2.1
Reporter: Hurshal Patel
Priority: Minor


Any sort of network failure results in a {{java.nio.ClosedChannelException}} 
which is bubbled up from {{kafka.network.BlockingChannel}}. 

Displaying such an exception to a user with little knowledge about Kafka can be 
more confusing than informative. A better user experience for the Kafka 
consumer would be to throw a more appropriately named exception when a 
{{ClosedChannelException}} is encountered.
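The improvement requested above amounts to translating the low-level exception at the client boundary. The sketch below shows the pattern; BrokerDisconnectedException is a hypothetical name, and the network call is stubbed out.

```java
import java.nio.channels.ClosedChannelException;

// Sketch: wrap ClosedChannelException in an exception whose name tells the
// user what actually happened, preserving the original as the cause.
public class ExceptionTranslation {
    public static class BrokerDisconnectedException extends RuntimeException {
        public BrokerDisconnectedException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    public static void send(boolean channelClosed) {
        try {
            if (channelClosed)
                throw new ClosedChannelException(); // stands in for the real network call
        } catch (ClosedChannelException e) {
            throw new BrokerDisconnectedException(
                "Connection to broker was closed before the request completed", e);
        }
    }

    public static void main(String[] args) {
        try {
            send(true);
        } catch (BrokerDisconnectedException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Keeping the original exception as the cause means the friendlier message helps newcomers without hiding the low-level detail from anyone who needs it.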





Re: [ANNOUNCE] Apache Kafka 0.8.2.2 Released

2015-10-06 Thread Ismael Juma
On Sat, Oct 3, 2015 at 4:36 PM, Jun Rao  wrote:
>
> We will update the download link in our website shortly.
>

The download page has been updated:

http://kafka.apache.org/downloads.html

Ismael


[jira] [Created] (KAFKA-2614) No more clients can connect after `TooManyConnectionsException` threshold (max.connections.per.ip) is reached

2015-10-06 Thread Stephen Chu (JIRA)
Stephen Chu created KAFKA-2614:
--

 Summary: No more clients can connect after 
`TooManyConnectionsException` threshold (max.connections.per.ip) is reached
 Key: KAFKA-2614
 URL: https://issues.apache.org/jira/browse/KAFKA-2614
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
 Environment: Debian Jessie
Reporter: Stephen Chu
Priority: Critical


It seems no more clients can connect to Kafka after `max.connections.per.ip` is 
reached, even if previous clients were already disconnected.

Using 0.8.3 (9c936b18), upon starting a fresh Kafka server that is configured 
with (max.connections.per.ip = 24), I noticed that I can cause the server to 
hit the error case of {{INFO Rejected connection from /0:0:0:0:0:0:0:1, address 
already has the configured maximum of 24 connections.}} very quickly, by simply 
looping through a bunch of simple clients against the server:
{noformat}
#! /bin/bash

for i in {1..30}; do
# either:
nc -vz 127.0.0.1 9092;
# or:
( telnet 127.0.0.1 9092; ) &
done

# if using telnet, kill all connected jobs now via:
kill %{2..31}
{noformat}

The problem seems to be that the counter for such short-lived client 
connections isn't properly decremented when using the 
`max.connections.per.ip` feature.
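The decrement bug described above can be sketched as simple per-IP bookkeeping; ConnectionQuotas here is an illustration of the invariant, not the actual SocketServer code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the per-IP count must be decremented on every disconnect, including
// short-lived connections, or the quota fills up permanently.
public class ConnectionQuotas {
    private final int maxPerIp;
    private final Map<String, Integer> counts = new HashMap<>();

    public ConnectionQuotas(int maxPerIp) { this.maxPerIp = maxPerIp; }

    public synchronized void inc(String ip) {
        int n = counts.getOrDefault(ip, 0);
        if (n >= maxPerIp)
            throw new IllegalStateException("Too many connections from " + ip);
        counts.put(ip, n + 1);
    }

    // The reported bug amounts to this call being skipped for some
    // connections; the count then only ever grows.
    public synchronized void dec(String ip) {
        counts.merge(ip, -1, Integer::sum);
    }

    public static void main(String[] args) {
        ConnectionQuotas q = new ConnectionQuotas(24);
        for (int i = 0; i < 30; i++) { q.inc("127.0.0.1"); q.dec("127.0.0.1"); }
        System.out.println("30 short-lived connections accepted"); // limit never trips
    }
}
```

With the decrement in place, the nc/telnet loop in the report would never exhaust the quota, since at most one connection per IP is open at a time.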

Turning on DEBUG logs, I cannot see the log lines "Closing connection from xxx" 
on [this 
line|https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/network/SocketServer.scala#L164]
 from the first few still-under-threshold short-lived connections; they only 
start showing *after* I hit the limit per that config.





[jira] [Updated] (KAFKA-2599) Metadata#getClusterForCurrentTopics can throw NPE even with null checking

2015-10-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2599:
---
 Reviewer: Guozhang Wang
Fix Version/s: (was: 0.8.1.2)

Trivial fix for NPE.

> Metadata#getClusterForCurrentTopics can throw NPE even with null checking
> -
>
> Key: KAFKA-2599
> URL: https://issues.apache.org/jira/browse/KAFKA-2599
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Edward Ribeiro
>Assignee: Edward Ribeiro
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> While working on another issue I have just seen the following:
> {code}
> private Cluster getClusterForCurrentTopics(Cluster cluster) {
> Collection<PartitionInfo> partitionInfos = new ArrayList<>();
> if (cluster != null) {
> for (String topic : this.topics) {
> partitionInfos.addAll(cluster.partitionsForTopic(topic));
> }
> }
> return new Cluster(cluster.nodes(), partitionInfos);
> }
> {code}
> Well, there's a null check for cluster, but if cluster is null it will throw 
> NPE at the return line by calling {{cluster.nodes()}}! So, I put together a 
> quick fix and changed {{MetadataTest}} to reproduce this error.
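The fix described above comes down to returning an empty result instead of dereferencing the null cluster at the return line. A minimal sketch, with ClusterView standing in for the real Cluster class:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;

// Sketch: guard the null case fully instead of dereferencing cluster
// in the return statement after the null check.
public class NullSafeCluster {
    public static class ClusterView {
        final Collection<String> nodes;
        ClusterView(Collection<String> nodes) { this.nodes = nodes; }
    }

    public static ClusterView forCurrentTopics(ClusterView cluster) {
        if (cluster == null)
            return new ClusterView(Collections.emptyList()); // was: cluster.nodes() -> NPE
        return new ClusterView(new ArrayList<>(cluster.nodes));
    }

    public static void main(String[] args) {
        System.out.println(forCurrentTopics(null).nodes.isEmpty()); // no NPE
    }
}
```

The original code is a good example of an incomplete guard: the null check protects the loop body but not the return expression, so the NPE merely moves rather than disappears.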





[jira] [Commented] (KAFKA-2615) Poll() method is broken wrt time

2015-10-06 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945696#comment-14945696
 ] 

Jiangjie Qin commented on KAFKA-2615:
-

Is this only a problem for the consumer, or also for the producer? In the producer, 
each NetworkClient.poll() does take a new timestamp for each poll(), right?

> Poll() method is broken wrt time
> 
>
> Key: KAFKA-2615
> URL: https://issues.apache.org/jira/browse/KAFKA-2615
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>
> Initially reported by [~ewencp] and discussed with [~hachikuji]. In 
> NetworkClient.java, the poll() method receives as input a "now" parameter, 
> does a whole bunch of work (e.g., selector.poll()) and then keeps using "now" 
> in all the subsequent method calls. 
> Passing Time everywhere instead of "now" is a potential fix, but might be 
> expensive since it's a new system call.





[GitHub] kafka pull request: KAFKA-2527; System Test for Quotas in Ducktape

2015-10-06 Thread lindong28
Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/275




[GitHub] kafka pull request: KAFKA-2527; System Test for Quotas in Ducktape

2015-10-06 Thread lindong28
GitHub user lindong28 reopened a pull request:

https://github.com/apache/kafka/pull/275

KAFKA-2527; System Test for Quotas in Ducktape

@granders Can you take a look at this quota system test?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-2527

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/275.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #275


commit cf1a116e79df10e1425a41efa2a25ba013fb7e19
Author: Dong Lin 
Date:   2015-10-05T07:07:01Z

KAFKA-2527; System Test for Quotas in Ducktape

commit 345b99705fe8c63798b0e7fe9e941bd22e4e1ad2
Author: Dong Lin 
Date:   2015-10-05T21:35:56Z

adjust quota configuration

commit 28bd200713b2a34178cbae093c5ef85ef2370078
Author: Dong Lin 
Date:   2015-10-06T02:00:12Z

JmxMixin will subclass object

commit 0d07fc73fb2776739d6a8f1de0e175afa86f7825
Author: Dong Lin 
Date:   2015-10-06T04:44:30Z

support jmx query with arbitrary object name and attributes

commit c0fd7682212709ab6612afd8e27d1d531616a863
Author: Dong Lin 
Date:   2015-10-06T16:17:43Z

jmx_object_name is required to use jmx tool

commit f7cfad50f47f2c367bf002a227185a42faf7486e
Author: Dong Lin 
Date:   2015-10-06T17:07:30Z

adjust quota test configuration






[jira] [Commented] (KAFKA-2615) Poll() method is broken wrt time

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945687#comment-14945687
 ] 

Ismael Juma commented on KAFKA-2615:


Yeah, we definitely want to pass the `Time` instance to the `NetworkClient` 
constructor instead of passing it to each method.

> Poll() method is broken wrt time
> 
>
> Key: KAFKA-2615
> URL: https://issues.apache.org/jira/browse/KAFKA-2615
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>
> Initially reported by [~ewencp] and discussed with [~hachikuji]. In 
> NetworkClient.java, the poll() method receives as input a "now" parameter, 
> does a whole bunch of work (e.g., selector.poll()) and then keeps using "now" 
> in all the subsequent method calls. 
> Passing Time everywhere instead of "now" is a potential fix, but might be 
> expensive since it's a new system call.





[jira] [Commented] (KAFKA-2527) System Test for Quotas in Ducktape

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945693#comment-14945693
 ] 

ASF GitHub Bot commented on KAFKA-2527:
---

GitHub user lindong28 reopened a pull request:

https://github.com/apache/kafka/pull/275

KAFKA-2527; System Test for Quotas in Ducktape

@granders Can you take a look at this quota system test?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-2527

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/275.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #275


commit cf1a116e79df10e1425a41efa2a25ba013fb7e19
Author: Dong Lin 
Date:   2015-10-05T07:07:01Z

KAFKA-2527; System Test for Quotas in Ducktape

commit 345b99705fe8c63798b0e7fe9e941bd22e4e1ad2
Author: Dong Lin 
Date:   2015-10-05T21:35:56Z

adjust quota configuration

commit 28bd200713b2a34178cbae093c5ef85ef2370078
Author: Dong Lin 
Date:   2015-10-06T02:00:12Z

JmxMixin will subclass object

commit 0d07fc73fb2776739d6a8f1de0e175afa86f7825
Author: Dong Lin 
Date:   2015-10-06T04:44:30Z

support jmx query with arbitrary object name and attributes

commit c0fd7682212709ab6612afd8e27d1d531616a863
Author: Dong Lin 
Date:   2015-10-06T16:17:43Z

jmx_object_name is required to use jmx tool

commit f7cfad50f47f2c367bf002a227185a42faf7486e
Author: Dong Lin 
Date:   2015-10-06T17:07:30Z

adjust quota test configuration




> System Test for Quotas in Ducktape
> --
>
> Key: KAFKA-2527
> URL: https://issues.apache.org/jira/browse/KAFKA-2527
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
>






[jira] [Commented] (KAFKA-2615) Poll() method is broken wrt time

2015-10-06 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945716#comment-14945716
 ] 

Jason Gustafson commented on KAFKA-2615:


[~becket_qin] I think it affects everything using NetworkClient. The problem is 
that the same timestamp is used before and after the call to Selector.poll(). 
This could skew timeout processing and would also affect metric reporting. Not 
sure if there are other implications as well.
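The skew Jason describes can be made concrete with a tiny sketch: any timeout computed against the pre-poll timestamp is off by however long the blocking call took. The helper below is illustrative, not NetworkClient code.

```java
// Sketch: a deadline check against a stale "now" misses the time spent
// blocking inside selector.poll(), here simulated with Thread.sleep().
public class StaleNowDemo {
    public static long remainingTimeout(long deadline, long now) {
        return Math.max(0, deadline - now);
    }

    public static void main(String[] args) throws InterruptedException {
        long now = System.currentTimeMillis();  // "now" taken once, up front
        long deadline = now + 100;
        Thread.sleep(150);                      // stands in for selector.poll()
        // Against the stale "now", 100 ms appear to remain even though the
        // deadline has already passed; a fresh read shows 0.
        System.out.println(remainingTimeout(deadline, now));
        System.out.println(remainingTimeout(deadline, System.currentTimeMillis()));
    }
}
```

The same stale value feeds metric reporting, which is why the skew shows up there as well as in timeout processing.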

> Poll() method is broken wrt time
> 
>
> Key: KAFKA-2615
> URL: https://issues.apache.org/jira/browse/KAFKA-2615
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>
> Initially reported by [~ewencp] and discussed with [~hachikuji]. In 
> NetworkClient.java, the poll() method receives as input a "now" parameter, 
> does a whole bunch of work (e.g., selector.poll()) and then keeps using "now" 
> in all the subsequent method calls. 
> Passing Time everywhere instead of "now" is a potential fix, but might be 
> expensive since it's a new system call.





[jira] [Commented] (KAFKA-2599) Metadata#getClusterForCurrentTopics can throw NPE even with null checking

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945732#comment-14945732
 ] 

ASF GitHub Bot commented on KAFKA-2599:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/262


> Metadata#getClusterForCurrentTopics can throw NPE even with null checking
> -
>
> Key: KAFKA-2599
> URL: https://issues.apache.org/jira/browse/KAFKA-2599
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Edward Ribeiro
>Assignee: Edward Ribeiro
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> While working on another issue I have just seen the following:
> {code}
> private Cluster getClusterForCurrentTopics(Cluster cluster) {
> Collection<PartitionInfo> partitionInfos = new ArrayList<>();
> if (cluster != null) {
> for (String topic : this.topics) {
> partitionInfos.addAll(cluster.partitionsForTopic(topic));
> }
> }
> return new Cluster(cluster.nodes(), partitionInfos);
> }
> {code}
> Well, there's a null check for cluster, but if cluster is null it will throw 
> NPE at the return line by calling {{cluster.nodes()}}! So, I put together a 
> quick fix and changed {{MetadataTest}} to reproduce this error.





[jira] [Updated] (KAFKA-2599) Metadata#getClusterForCurrentTopics can throw NPE even with null checking

2015-10-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2599:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 262
[https://github.com/apache/kafka/pull/262]

> Metadata#getClusterForCurrentTopics can throw NPE even with null checking
> -
>
> Key: KAFKA-2599
> URL: https://issues.apache.org/jira/browse/KAFKA-2599
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Edward Ribeiro
>Assignee: Edward Ribeiro
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> While working on another issue I have just seen the following:
> {code}
> private Cluster getClusterForCurrentTopics(Cluster cluster) {
> Collection<PartitionInfo> partitionInfos = new ArrayList<>();
> if (cluster != null) {
> for (String topic : this.topics) {
> partitionInfos.addAll(cluster.partitionsForTopic(topic));
> }
> }
> return new Cluster(cluster.nodes(), partitionInfos);
> }
> {code}
> Well, there's a null check for cluster, but if cluster is null it will throw 
> NPE at the return line by calling {{cluster.nodes()}}! So, I put together a 
> quick fix and changed {{MetadataTest}} to reproduce this error.





[GitHub] kafka pull request: KAFKA-2599 Metadata#getClusterForCurrentTopics...

2015-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/262




[jira] [Commented] (KAFKA-2615) Poll() method is broken wrt time

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945635#comment-14945635
 ] 

Ismael Juma commented on KAFKA-2615:


This will also simplify the signature of many methods in `NetworkClient`.

> Poll() method is broken wrt time
> 
>
> Key: KAFKA-2615
> URL: https://issues.apache.org/jira/browse/KAFKA-2615
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>
> Initially reported by [~ewencp] and discussed with [~hachikuji]. In 
> NetworkClient.java, the poll() method receives as input a "now" parameter, 
> does a whole bunch of work (e.g., selector.poll()) and then keeps using "now" 
> in all the subsequent method calls. 
> Passing Time everywhere instead of "now" is a potential fix, but might be 
> expensive since it's a new system call.





[jira] [Commented] (KAFKA-2615) Poll() method is broken wrt time

2015-10-06 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945641#comment-14945641
 ] 

Jason Gustafson commented on KAFKA-2615:


[~enothereska] Could we pull the Time object into NetworkClient? This might let 
us avoid unneeded system calls by exposing higher-level methods. For example, 
the usual pattern for sending requests looks something like this:
{code}
long now = time.milliseconds();
if (client.ready(node, now)) {
  client.send(request, now);
}
{code}
If the Time object was internal to NetworkClient, we could combine this logic 
into a single method and avoid the extra system call:
{code}
boolean sendIfReady(Request request);
{code}

> Poll() method is broken wrt time
> 
>
> Key: KAFKA-2615
> URL: https://issues.apache.org/jira/browse/KAFKA-2615
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>
> Initially reported by [~ewencp] and discussed with [~hachikuji]. In 
> NetworkClient.java, the poll() method receives as input a "now" parameter, 
> does a whole bunch of work (e.g., selector.poll()) and then keeps using "now" 
> in all the subsequent method calls. 
> Passing Time everywhere instead of "now" is a potential fix, but might be 
> expensive since it's a new system call.





[jira] [Updated] (KAFKA-2614) No more clients can connect after `TooManyConnectionsException` threshold (max.connections.per.ip) is reached

2015-10-06 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2614:
---
Fix Version/s: 0.9.0.0
  Component/s: core

> No more clients can connect after `TooManyConnectionsException` threshold 
> (max.connections.per.ip) is reached
> -
>
> Key: KAFKA-2614
> URL: https://issues.apache.org/jira/browse/KAFKA-2614
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: Debian Jessie
>Reporter: Stephen Chu
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> It seems no more clients can connect to Kafka after `max.connections.per.ip` 
> is reached, even if previous clients were already disconnected.
> Using 0.8.3 (9c936b18), upon starting a fresh Kafka server that is configured 
> with (max.connections.per.ip = 24), I noticed that I can cause the server to 
> hit the error case of {{INFO Rejected connection from /0:0:0:0:0:0:0:1, 
> address already has the configured maximum of 24 connections.}} very quickly, 
> by simply looping through a bunch of simple clients against the server:
> {noformat}
> #! /bin/bash
> for i in {1..30}; do
> # either:
> nc -vz 127.0.0.1 9092;
> # or:
> ( telnet 127.0.0.1 9092; ) &
> done
> # if using telnet, kill all connected jobs now via:
> kill %{2..31}
> {noformat}
> The problem seems to be that the counter for such short-lived client 
> connections aren't properly decrementing when using the 
> `max.connections.per.ip` feature.
> Turning on DEBUG logs, I cannot see the log lines "Closing connection from 
> xxx" on [this 
> line|https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/network/SocketServer.scala#L164]
>  from the first few still-under-threshold short-lived connections, but starts 
> showing *after* I hit the limit per that config.
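The invariant at stake can be sketched in a few lines. This is an illustrative simplification, not the real SocketServer code: every accepted connection must be matched by exactly one decrement on every close path, or the per-IP count ratchets up until the limit blocks all new clients.

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectionQuotas {
    // Hypothetical simplified per-IP connection counter.
    private final int maxPerIp;
    private final Map<String, Integer> counts = new HashMap<>();

    public ConnectionQuotas(int maxPerIp) { this.maxPerIp = maxPerIp; }

    public synchronized boolean tryAccept(String ip) {
        int current = counts.getOrDefault(ip, 0);
        if (current >= maxPerIp) return false;  // "address already has the configured maximum"
        counts.put(ip, current + 1);
        return true;
    }

    // Must be invoked on every close path, including short-lived
    // connections that never sent a request.
    public synchronized void onClose(String ip) {
        int remaining = counts.getOrDefault(ip, 0) - 1;
        if (remaining <= 0) counts.remove(ip);
        else counts.put(ip, remaining);
    }
}
```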





[jira] [Updated] (KAFKA-2482) Allow copycat sink tasks to pause/resume consumption of specific topic partitions

2015-10-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2482:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 249
[https://github.com/apache/kafka/pull/249]

> Allow copycat sink tasks to pause/resume consumption of specific topic 
> partitions
> -
>
> Key: KAFKA-2482
> URL: https://issues.apache.org/jira/browse/KAFKA-2482
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Consider a situation where a sink is assigned 2 topic partitions. One of them 
> runs into a transient issue and no more data from it can be processed. 
> However, the other topic partition is proceeding fine. We don't want to block 
> the second partition by constantly throwing exceptions due to data from the 
> first topic partition.
> The new consumer now supports pause/resume, so we should expose these to the 
> task. We could expose the functionality directly, although that would also 
> make the task responsible for scheduling some task in the future to check 
> whether it can resume. Another approach might be to make the API include the 
> backoff time. Then the framework would automatically resume consumption of 
> the topic partition after that time, which would presumably prompt the task 
> to reevaluate the situation for the topic partition when it receives another 
> message for it.
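The "API include the backoff time" option could be sketched as below. All names are illustrative, not the eventual Copycat API: the task pauses a partition for a backoff period, and the framework resumes expired partitions on its regular poll cycle.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class PausedPartitions {
    // topic-partition -> deadline (ms) at which the framework resumes it
    private final Map<String, Long> resumeAt = new HashMap<>();

    public void pause(String topicPartition, long backoffMs, long nowMs) {
        resumeAt.put(topicPartition, nowMs + backoffMs);
    }

    // Partitions whose backoff has expired; removed and handed back to the
    // framework, which would call consumer.resume() for each of them.
    public List<String> drainResumable(long nowMs) {
        List<String> ready = new ArrayList<>();
        Iterator<Map.Entry<String, Long>> it = resumeAt.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() <= nowMs) { ready.add(e.getKey()); it.remove(); }
        }
        return ready;
    }
}
```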





[GitHub] kafka pull request: KAFKA-2482: Allow sink tasks to get their curr...

2015-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/249




[jira] [Commented] (KAFKA-2482) Allow copycat sink tasks to pause/resume consumption of specific topic partitions

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945833#comment-14945833
 ] 

ASF GitHub Bot commented on KAFKA-2482:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/249


> Allow copycat sink tasks to pause/resume consumption of specific topic 
> partitions
> -
>
> Key: KAFKA-2482
> URL: https://issues.apache.org/jira/browse/KAFKA-2482
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Consider a situation where a sink is assigned 2 topic partitions. One of them 
> runs into a transient issue and no more data from it can be processed. 
> However, the other topic partition is proceeding fine. We don't want to block 
> the second partition by constantly throwing exceptions due to data from the 
> first topic partition.
> The new consumer now supports pause/resume, so we should expose these to the 
> task. We could expose the functionality directly, although that would also 
> make the task responsible for scheduling some task in the future to check 
> whether it can resume. Another approach might be to make the API include the 
> backoff time. Then the framework would automatically resume consumption of 
> the topic partition after that time, which would presumably prompt the task 
> to reevaluate the situation for the topic partition when it receives another 
> message for it.





Re: [DISCUSS] KIP-35 - Retrieve protocol version

2015-10-06 Thread Magnus Edenhill
After today's KIP call we decided on the following regarding KIP-35:
 * limit scope to just propagate supported API versions (no key-value tags,
broker info, etc)
 * let the new API return the full list of broker's supported ApiKeys and
ApiVersions, rather than an aggregated global version
 * ApiVersions are sorted in order of preference
 * rename API from BrokerMetadataRequest to ProtocolVersionRequest
 * workaround for disconnect-on-unknown-api-request remains valid.

The wiki page has been updated accordingly:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version

Thanks,
Magnus
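The "sorted in order of preference" decision implies a simple client-side negotiation, sketched here with hypothetical names (not the final protocol): the client walks the broker's preferred versions for an API key and picks the first one it also implements.

```java
import java.util.List;
import java.util.Set;

public class VersionNegotiation {
    // brokerPreferred: versions for one ApiKey as returned by the proposed
    // ProtocolVersionRequest, most-preferred first.
    public static Integer pick(List<Integer> brokerPreferred, Set<Integer> clientSupported) {
        for (int v : brokerPreferred)
            if (clientSupported.contains(v)) return v;
        return null;  // no common version: surface a clear error, don't retry
    }
}
```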


2015-10-06 17:28 GMT+02:00 Joel Koshy :

> Thanks for the write-up and discussion. This would have been super
> useful in our last round of deployments at LinkedIn for which we ended
> up having to hack around a number of incompatibilities. I could list
> all of the compatibility issues that have hit us, but some are
> irrelevant to this specific KIP (e.g., ZooKeeper registration
> versions). So I should perhaps just list two that I think are
> relevant:
>
> First, is the way that our metrics collection works. We have a special
> metrics producer on every service that emits to a separate metrics
> cluster. Kafka brokers also use this producer to emit to the
> (separate) metrics cluster. So when we upgrade our test clusters, the
> metrics producer in those clusters end up sending the latest produce
> request version to the yet to be upgraded metrics cluster. This caused
> an issue for us in the last round of deployments which bumped up the
> protocol version for the quota-related throttle-time response field.
> We got around that by just setting the metrics producer requiredAcks
> to zero (since the error occurs on parsing the response - and the old
> broker fortunately did not check the request version).
>
> Second, the inter-broker protocol versioning scheme works fine across
> official Apache releases but we picked up intermediate versions that
> contained some request version bumps, and then follow-up versions that
> picked up some more request bumps. For people deploying off trunk,
> protocol version lookup would help.
>
> General comments on the discussion and KIP:
>
> I like Grant’s suggestion on using this to avoid the explicit
> inter-broker-protocol-version - this will not only help address the
> second compatibility issue above, but I’m all for anything that
> eliminates an hour of config deployment (our deployments can take that
> long!)
>
> +1 on explicit response fields vs. key-value pairs - I don’t see this
> reflected on the wiki though.
>
> Aggregate protocol version vs specific request version: so you are
> associating an increasing aggregate version (for each request version
> bump). It may be useful to allow look up of the supported version (or
> version range) for each request type. The BrokerMetadataResponse could
> alternately return a vector of supported version ranges for each
> request type.
>
> Error response for unrecognized request versions: One option raised in
> the discussion was to always include the highest supported version of
> that request type in the response, but it may be worthwhile avoiding
> that (since it is irrelevant most of the time) and fold that into the
> BrokerMetadataRequest instead.
>
> Max-message size/compression-codec: I actually prefer having this only
> in TopicMetadataResponse and leave it out of the
> BrokerMetadataRequest/Response (even for the defaults) since these are
> really topic-specific fields. Rack-info on the other hand should
> probably be there (at some point) in the BrokerMetadataResponse, and
> this should perhaps be just a raw string that would need some
> pluggable (deployment-specific) parsing.
>
> Thanks,
>
> Joel
>
>
> On Wed, Sep 30, 2015 at 3:18 PM, Magnus Edenhill 
> wrote:
> > Everyone, thanks for your comments and input this far, here
> > follows an update of the proposal based on the discussions.
> >
> >
> >  BrokerMetadataRequest => [NodeId]
> >NodeId => int32   // Request Metadata for these brokers only.
> >  // Empty array: retrieve for all brokers.
> >  // Use -1 for current broker only.
> >
> >  BrokerMetadataResponse => [BrokerId Host Port ProtocolVersionMin
> > ProtocolVersionMax [Key Value]]
> >   NodeId => int32  // Broker NodeId
> >   Host => string   // Broker Host
> >   Port => int32// Broker Port
> >   ProtocolVersionMin => int32  // Broker's minimum supported protocol
> > version
> >   ProtocolVersionMax => int32  // Broker's maximum supported protocol
> > version
> >   Key => string// Tag name
>   Value => string  // Tag value
> >
> >
> > Builtin tags:
> >  "broker.id"  = "9"
> >  "broker.version" = "0.9.0.0-SNAPSHOT-d12ca4f"
> >  "broker.version.int" = "0x0009"
> >  "compression.codecs" = "gzip,snappy,lz4"
> >  

Build failed in Jenkins: kafka-trunk-jdk8 #4

2015-10-06 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2599: Fix Metadata.getClusterForCurrentTopics throws NPE

[wangguoz] KAFKA-2573: Mirror maker system test hangs and eventually fails

[wangguoz] KAFKA-2482: Allow sink tasks to get their current assignment, as 
well as

--
[...truncated 857 lines...]
:81:
 warning: no @param for valueSerializer
public MockProducer(boolean autoComplete, Serializer keySerializer, 
Serializer valueSerializer) {
   ^
:39:
 warning: no @return
public int partition(String topic, Object key, byte[] keyBytes, Object 
value, byte[] valueBytes, Cluster cluster);
   ^
:71:
 warning: no @return
public String topic() {
  ^
:78:
 warning: no @return
public K key() {
 ^
:92:
 warning: no @return
public Integer partition() {
   ^
:44:
 warning: no @return
public long offset() {
^
:51:
 warning: no @return
public String topic() {
  ^
:58:
 warning: no @return
public int partition() {
   ^

76 warnings
:kafka-trunk-jdk8:log4j-appender:compileJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
1 warning

:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes
:kafka-trunk-jdk8:log4j-appender:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:259:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:277:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:278:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^

Jenkins build is back to normal : kafka-trunk-jdk7 #661

2015-10-06 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: KAFKA-2474: Add caching of JSON schema convers...

2015-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/250




Re: Kafka KIP meeting Oct 6 at 11:00am PST

2015-10-06 Thread Jun Rao
The following are the notes from today's KIP discussion.

We only had the time to go through KIP-35. The consensus is that we will
add a BrokerProtocolRequest that returns the supported versions for every
type of requests. It's up to the client to decide how to use this. Magnus
will update the KIP wiki with more details.

The video will be uploaded soon in
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals .

Thanks,

Jun

On Mon, Oct 5, 2015 at 2:13 PM, Jun Rao  wrote:

> Hi, Everyone,
>
> We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
> attend but haven't received an invite, please let me know. The following is
> the agenda.
>
> Agenda:
> 1. KIP-32: Add CreateTime and LogAppendTime to Kafka message
> 2. KIP-33: Add a time based log index
> 3. KIP-35: Add request for retrieving broker protocol versions.
> 4. KIP-36: Add rack-aware support
>
> Thanks,
>
> Jun
>


[jira] [Created] (KAFKA-2617) Move protocol field default values to Protocol

2015-10-06 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2617:


 Summary: Move protocol field default values to Protocol
 Key: KAFKA-2617
 URL: https://issues.apache.org/jira/browse/KAFKA-2617
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
 Fix For: 0.9.0.0


Right now the default values are scattered in the Request / Response classes, 
and some duplicates already exists like JoinGroupRequest.UNKNOWN_CONSUMER_ID 
and OffsetCommitRequest.DEFAULT_CONSUMER_ID. We would like to move all default 
values into org.apache.kafka.common.protocol.Protocol since 
org.apache.kafka.common.requests depends on org.apache.kafka.common.protocol 
anyways.






Re: [VOTE] KIP-31 - Move to relative offsets in compressed message sets.

2015-10-06 Thread Jiangjie Qin
Hi folks,

Sorry for this prolonged voting session and thanks for the votes.

There is an additional broker configuration change added to the KIP after
the vote. We propose to add a message.format.version configuration to the
broker to indicate which version it should use to store the message on disk.

It is mainly trying to minimize the format conversion for consumption
during rolling out. Because the client upgrade could take some time and it
can be expensive to give up zero-copy for the majority of the consumers, we
want to avoid doing that.
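For readers new to the thread, the zero-copy benefit of KIP-31 comes from relative offsets inside the compressed wrapper. The sketch below is an illustration only; the actual KIP-31 wire format differs in detail. The producer writes inner messages with offsets 0..n-1, the broker rewrites only the wrapper's base offset on append (no decompress/recompress), and the consumer reconstructs absolute offsets arithmetically.

```java
import java.util.Arrays;

public class RelativeOffsets {
    // Consumer-side reconstruction: absolute = broker-assigned base + relative.
    public static long[] absoluteOffsets(long assignedBaseOffset, int messageCount) {
        long[] out = new long[messageCount];
        for (int i = 0; i < messageCount; i++)
            out[i] = assignedBaseOffset + i;
        return out;
    }

    public static void main(String[] args) {
        // Broker assigned base offset 100 to a 3-message compressed set.
        System.out.println(Arrays.toString(absoluteOffsets(100L, 3)));
    }
}
```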

I would like to see if people have concerns over this change or not. If
there is no concerns, I will close the vote as passed. Otherwise I will
initiate another vote.

Thanks,

Jiangjie (Becket) Qin


On Fri, Sep 25, 2015 at 4:41 PM, Ewen Cheslack-Postava 
wrote:

> +1
>
> -Ewen
>
> On Fri, Sep 25, 2015 at 11:15 AM, Jun Rao  wrote:
>
> > +1. I agree that it's worth thinking through the migration plan a bit
> more.
> >
> > Thanks,
> >
> > Jun
> >
> > On Thu, Sep 24, 2015 at 6:14 PM, Joel Koshy  wrote:
> >
> > > +1 on everything but the upgrade plan, which is a bit scary - will
> > > comment on the discuss thread.
> > >
> > > On Thu, Sep 24, 2015 at 9:51 AM, Mayuresh Gharat
> > >  wrote:
> > > > +1
> > > >
> > > > On Wed, Sep 23, 2015 at 10:16 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > >> +1
> > > >>
> > > >> On Wed, Sep 23, 2015 at 9:32 PM, Aditya Auradkar <
> > > >> aaurad...@linkedin.com.invalid> wrote:
> > > >>
> > > >> > +1
> > > >> >
> > > >> > On Wed, Sep 23, 2015 at 8:03 PM, Neha Narkhede  >
> > > >> wrote:
> > > >> >
> > > >> > > +1
> > > >> > >
> > > >> > > On Wed, Sep 23, 2015 at 6:21 PM, Todd Palino  >
> > > >> wrote:
> > > >> > >
> > > >> > > > +1000
> > > >> > > >
> > > >> > > > !
> > > >> > > >
> > > >> > > > -Todd
> > > >> > > >
> > > >> > > > On Wednesday, September 23, 2015, Jiangjie Qin
> > > >> >  > > >> > > >
> > > >> > > > wrote:
> > > >> > > >
> > > >> > > > > Hi,
> > > >> > > > >
> > > >> > > > > Thanks a lot for the reviews and feedback on KIP-31. It
> looks
> > > all
> > > >> the
> > > >> > > > > concerns of the KIP has been addressed. I would like to
> start
> > > the
> > > >> > > voting
> > > >> > > > > process.
> > > >> > > > >
> > > >> > > > > The short summary for the KIP:
> > > >> > > > > We are going to use the relative offset in the message
> format
> > to
> > > >> > avoid
> > > >> > > > > server side recompression.
> > > >> > > > >
> > > >> > > > > In case you haven't got a chance to check, here is the KIP
> > link.
> > > >> > > > >
> > > >> > > > >
> > > >> > > >
> > > >> > >
> > > >> >
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-31+-+Move+to+relative+offsets+in+compressed+message+sets
> > > >> > > > >
> > > >> > > > > Thanks,
> > > >> > > > >
> > > >> > > > > Jiangjie (Becket) Qin
> > > >> > > > >
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > --
> > > >> > > Thanks,
> > > >> > > Neha
> > > >> > >
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> -- Guozhang
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > -Regards,
> > > > Mayuresh R. Gharat
> > > > (862) 250-7125
> > >
> >
>
>
>
> --
> Thanks,
> Ewen
>


[GitHub] kafka-site pull request: updated contributing.html

2015-10-06 Thread guozhangwang
Github user guozhangwang commented on the pull request:

https://github.com/apache/kafka-site/pull/1#issuecomment-146017600
  
LGTM, thanks @omkreddy .




Re: [DISCUSS] Email to dev list for GitHub PR comments

2015-10-06 Thread Guozhang Wang
I think github cannot batch comments in emails (yet?), which is sad..

I would prefer to keep both github@kafka / github@kafka-site to send only
open/close PRs unless you subscribe to some tickets.

Guozhang

On Tue, Oct 6, 2015 at 9:49 AM, Jiangjie Qin 
wrote:

> Hi Ismael,
>
> Thanks for bringing this up. Completely agree the exploding amount of
> emails is a little annoying, regardless they are sent to dev list or
> personal emails.
>
> Not sure whether it is doable or not, but here is what I am thinking.
> 1. batch the comments email and send periodically to dev list or project
> subscribers. e.g. 4 hours a day.
> 2. direct email the PR submitter/reviewers when comments are put.
>
> Not sure if github can do that or not. Maybe worth sending email to ask.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
> On Tue, Oct 6, 2015 at 1:35 AM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > You may have noticed that we receive one email for each comment in
> > kafka-site pull requests. We don't have that enabled for the kafka (ie
> > code) repository. Maybe that's OK as the number of emails would be much
> > higher for the code repository, but I thought it would be good to get
> other
> > people's opinions on it.
> >
> > So, for the code repository, would you prefer if:
> >
> > 1. We leave things as they are (emails to dev list are sent for
> > opening/closing of PRs and other notifications are handled by one's own
> > GitHub notification settings)
> > 2. We change it to be like kafka-site and an email is sent to the dev
> list
> > for each PR comment
> > 3. Something else
> >
> > Ismael
> >
>



-- 
-- Guozhang


[jira] [Commented] (KAFKA-2474) Add caching for converted Copycat schemas in JSONConverter

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945936#comment-14945936
 ] 

ASF GitHub Bot commented on KAFKA-2474:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/250


> Add caching for converted Copycat schemas in JSONConverter
> --
>
> Key: KAFKA-2474
> URL: https://issues.apache.org/jira/browse/KAFKA-2474
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> From discussion of KAFKA-2367:
> bq. Caching of conversion of schemas. In the JSON implementation we're 
> including, we're probably being pretty wasteful right now since every record 
> has to translate both the schema and data to JSON. We should definitely be 
> doing some caching here. I think an LRU using an IdentityHashMap should be 
> fine. However, this does assume that connectors are good about reusing 
> schemas (defining them up front, or if they are dynamic they should have 
> their own cache of schemas and be able to detect when they can be reused).





[jira] [Updated] (KAFKA-2474) Add caching for converted Copycat schemas in JSONConverter

2015-10-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2474:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 250
[https://github.com/apache/kafka/pull/250]

> Add caching for converted Copycat schemas in JSONConverter
> --
>
> Key: KAFKA-2474
> URL: https://issues.apache.org/jira/browse/KAFKA-2474
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> From discussion of KAFKA-2367:
> bq. Caching of conversion of schemas. In the JSON implementation we're 
> including, we're probably being pretty wasteful right now since every record 
> has to translate both the schema and data to JSON. We should definitely be 
> doing some caching here. I think an LRU using an IdentityHashMap should be 
> fine. However, this does assume that connectors are good about reusing 
> schemas (defining them up front, or if they are dynamic they should have 
> their own cache of schemas and be able to detect when they can be reused).





[jira] [Updated] (KAFKA-2573) Mirror maker system test hangs and eventually fails

2015-10-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2573:
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 234
[https://github.com/apache/kafka/pull/234]

> Mirror maker system test hangs and eventually fails
> ---
>
> Key: KAFKA-2573
> URL: https://issues.apache.org/jira/browse/KAFKA-2573
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> Due to changes made in KAFKA-2015, handling of {{--consumer.config}} has 
> changed, more details is specified on KAFKA-2467. This leads to the exception.
> {code}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
>   at kafka.utils.Pool.keys(Pool.scala:77)
>   at 
> kafka.consumer.FetchRequestAndResponseStatsRegistry$.removeConsumerFetchRequestAndResponseStats(FetchRequestAndResponseStats.scala:69)
>   at 
> kafka.metrics.KafkaMetricsGroup$.removeAllConsumerMetrics(KafkaMetricsGroup.scala:189)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:200)
>   at kafka.consumer.OldConsumer.stop(BaseConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:98)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:57)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:41)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
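This NoSuchMethodError is the classic symptom of bytecode compiled against JDK 8, where ConcurrentHashMap.keySet() gained the covariant return type ConcurrentHashMap.KeySetView, being run on a JDK 7 runtime whose method returns plain Set. A sketch of the usual workaround (the actual patch may differ): invoke keySet() through the Map interface so the call site links against Map.keySet() on both JDKs.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeySetLinkage {
    // Widening to Map before calling keySet() keeps the compiled method
    // descriptor JDK-7 compatible, avoiding the KeySetView linkage.
    public static Set<String> keys(ConcurrentHashMap<String, Integer> pool) {
        Map<String, Integer> asMap = pool;
        return asMap.keySet();
    }
}
```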





[jira] [Commented] (KAFKA-2573) Mirror maker system test hangs and eventually fails

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945805#comment-14945805
 ] 

ASF GitHub Bot commented on KAFKA-2573:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/234


> Mirror maker system test hangs and eventually fails
> ---
>
> Key: KAFKA-2573
> URL: https://issues.apache.org/jira/browse/KAFKA-2573
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> Due to changes made in KAFKA-2015, handling of {{--consumer.config}} has 
> changed, more details is specified on KAFKA-2467. This leads to the exception.
> {code}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
>   at kafka.utils.Pool.keys(Pool.scala:77)
>   at 
> kafka.consumer.FetchRequestAndResponseStatsRegistry$.removeConsumerFetchRequestAndResponseStats(FetchRequestAndResponseStats.scala:69)
>   at 
> kafka.metrics.KafkaMetricsGroup$.removeAllConsumerMetrics(KafkaMetricsGroup.scala:189)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:200)
>   at kafka.consumer.OldConsumer.stop(BaseConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:98)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:57)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:41)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}





[GitHub] kafka pull request: KAFKA-2573: Mirror maker system test hangs and...

2015-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/234




[jira] [Commented] (KAFKA-2615) Poll() method is broken wrt time

2015-10-06 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945849#comment-14945849
 ] 

Jiangjie Qin commented on KAFKA-2615:
-

[~hachikuji] I see. Yes, that is an issue. It actually might impact the request 
timeout settings. Although I kind of like the unified logical clock, we may 
have to change it. 

One alternative could be to let selector.poll() return the elapsed time 
instead of void, and then do something like now += selector.poll(timeout). 
This might be a little hacky but less sensitive to time skew because we keep 
the logical clock.





> Poll() method is broken wrt time
> 
>
> Key: KAFKA-2615
> URL: https://issues.apache.org/jira/browse/KAFKA-2615
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>
> Initially reported by [~ewencp] and discussed with [~hachikuji]. In 
> NetworkClient.java, the poll() method receives as input a "now" parameter, 
> does a whole bunch of work (e.g., selector.poll()) and then keeps using "now" 
> in all the subsequent method calls. 
> Passing Time everywhere instead of "now" is a potential fix, but might be 
> expensive since it's a new system call.





Re: [DISCUSS] KIP-35 - Retrieve protocol version

2015-10-06 Thread Jason Gustafson
It would be nice to get the unknown api workaround into 0.9, even if we
can't have ProtocolVersionRequest. It should be a small change and it
allows clients going forward to detect when they have connected to an old
broker, which lets them surface a helpful error to the user. This is much
better than retrying indefinitely which is probably how most current
clients handle this case.

-Jason

On Tue, Oct 6, 2015 at 2:25 PM, Magnus Edenhill  wrote:

> After today's KIP call we decided on the following regarding KIP-35:
>  * limit scope to just propagate supported API versions (no key-value tags,
> broker info, etc)
>  * let the new API return the full list of broker's supported ApiKeys and
> ApiVersions, rather than an aggregated global version
>  * ApiVersions are sorted in order of preference
>  * rename API from BrokerMetadataRequest to ProtocolVersionRequest
>  * workaround for disconnect-on-unknown-api-request remains valid.
>
> The wiki page has been updated accordingly:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version
>
> Thanks,
> Magnus
>
>
> 2015-10-06 17:28 GMT+02:00 Joel Koshy :
>
> > Thanks for the write-up and discussion. This would have been super
> > useful in our last round of deployments at LinkedIn for which we ended
> > up having to hack around a number of incompatibilities. I could list
> > all of the compatibility issues that have hit us, but some are
> > irrelevant to this specific KIP (e.g., ZooKeeper registration
> > versions). So I should perhaps just list two that I think are
> > relevant:
> >
> > First, is the way that our metrics collection works. We have a special
> > metrics producer on every service that emits to a separate metrics
> > cluster. Kafka brokers also use this producer to emit to the
> > (separate) metrics cluster. So when we upgrade our test clusters, the
> > metrics producer in those clusters end up sending the latest produce
> > request version to the yet to be upgraded metrics cluster. This caused
> > an issue for us in the last round of deployments which bumped up the
> > protocol version for the quota-related throttle-time response field.
> > We got around that by just setting the metrics producer requiredAcks
> > to zero (since the error occurs on parsing the response - and the old
> > broker fortunately did not check the request version).
> >
> > Second, the inter-broker protocol versioning scheme works fine across
> > official Apache releases but we picked up intermediate versions that
> > contained some request version bumps, and then follow-up versions that
> > picked up some more request bumps. For people deploying off trunk,
> > protocol version lookup would help.
> >
> > General comments on the discussion and KIP:
> >
> > I like Grant’s suggestion on using this to avoid the explicit
> > inter-broker-protocol-version - this will not only help address the
> > second compatibility issue above, but I’m all for anything that
> > eliminates an hour of config deployment (our deployments can take that
> > long!)
> >
> > +1 on explicit response fields vs. key-value pairs - I don’t see this
> > reflected on the wiki though.
> >
> > Aggregate protocol version vs specific request version: so you are
> > associating an increasing aggregate version (for each request version
> > bump). It may be useful to allow look up of the supported version (or
> > version range) for each request type. The BrokerMetadataResponse could
> > alternately return a vector of supported version ranges for each
> > request type.
> >
> > Error response for unrecognized request versions: One option raised in
> > the discussion was to always include the highest supported version of
> > that request type in the response, but it may be worthwhile avoiding
> > that (since it is irrelevant most of the time) and fold that into the
> > BrokerMetadataRequest instead.
> >
> > Max-message size/compression-codec: I actually prefer having this only
> > in TopicMetadataResponse and leave it out of the
> > BrokerMetadataRequest/Response (even for the defaults) since these are
> > really topic-specific fields. Rack-info on the other hand should
> > probably be there (at some point) in the BrokerMetadataResponse, and
> > this should perhaps be just a raw string that would need some
> > pluggable (deployment-specific) parsing.
> >
> > Thanks,
> >
> > Joel
> >
> >
> > On Wed, Sep 30, 2015 at 3:18 PM, Magnus Edenhill 
> > wrote:
> > > Everyone, thanks for your comments and input this far, here
> > > follows an update of the proposal based on the discussions.
> > >
> > >
> > >  BrokerMetadataRequest => [NodeId]
> > >NodeId => int32   // Request Metadata for these brokers only.
> > >  // Empty array: retrieve for all brokers.
> > >  // Use -1 for current broker only.
> > >
> > >  BrokerMetadataResponse => [BrokerId Host Port ProtocolVersionMin
> > > 

[jira] [Commented] (KAFKA-2391) Blocking call such as position(), partitionsFor(), committed() and listTopics() should have a timeout

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944640#comment-14944640
 ] 

Ismael Juma commented on KAFKA-2391:


Note that this is related to KAFKA-1894.

> Blocking call such as position(), partitionsFor(), committed() and 
> listTopics() should have a timeout
> -
>
> Key: KAFKA-2391
> URL: https://issues.apache.org/jira/browse/KAFKA-2391
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Onur Karaman
>
> The blocking calls should have a timeout from either configuration or 
> parameter. So far we have position(), partitionsFor(), committed() and 
> listTopics().





[GitHub] kafka-site pull request: updated contributing.html

2015-10-06 Thread ijuma
Github user ijuma commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/1#discussion_r41236153
  
--- Diff: contributing.html ---
@@ -25,9 +25,8 @@
 
 To submit a change for inclusion please do the following:
 
-
-   Create a patch that applies cleanly against http://svn.apache.org/repos/asf/kafka/site/;>SVN trunk.
-   Open a https://issues.apache.org/jira/browse/KAFKA;>JIRA ticket describing 
the patch and attach your patch to the JIRA. Include in the JIRA information 
about the issue and the approach you took in fixing it (if this isn't obvious). 
Also, mark the jira as "Patch Available" by clicking on the "Submit Patch" 
button (this is done automatically if you use the code review tool below). 
Creating the JIRA will automatically send an email to the developer email list 
alerting us of the new issue.
+
+ Follow the detailed instructions in https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes;>Contributing
 Website Changes.
--- End diff --

Seems like this line is over-indented. Also, I'd probably drop `detailed` 
and just say "Follow the instructions..."




[DISCUSS] Email to dev list for GitHub PR comments

2015-10-06 Thread Ismael Juma
Hi all,

You may have noticed that we receive one email for each comment in
kafka-site pull requests. We don't have that enabled for the kafka (ie
code) repository. Maybe that's OK as the number of emails would be much
higher for the code repository, but I thought it would be good to get other
people's opinions on it.

So, for the code repository, would you prefer if:

1. We leave things as they are (emails to dev list are sent for
opening/closing of PRs and other notifications are handled by one's own
GitHub notification settings)
2. We change it to be like kafka-site and an email is sent to the dev list
for each PR comment
3. Something else

Ismael


[jira] [Commented] (KAFKA-1451) Broker stuck due to leader election race

2015-10-06 Thread XiangChen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944726#comment-14944726
 ] 

XiangChen commented on KAFKA-1451:
--

Also hit in 0.8.2.1, and the /controller node in ZK is lost.

> Broker stuck due to leader election race 
> -
>
> Key: KAFKA-1451
> URL: https://issues.apache.org/jira/browse/KAFKA-1451
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Maciek Makowski
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1451.patch, KAFKA-1451_2014-07-28_20:27:32.patch, 
> KAFKA-1451_2014-07-29_10:13:23.patch
>
>
> h3. Symptoms
> The broker does not become available due to being stuck in an infinite loop 
> while electing leader. This can be recognised by the following line being 
> repeatedly written to server.log:
> {code}
> [2014-05-14 04:35:09,187] INFO I wrote this conflicted ephemeral node 
> [{"version":1,"brokerid":1,"timestamp":"1400060079108"}] at /controller a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> {code}
> h3. Steps to Reproduce
> In a single kafka 0.8.1.1 node, single zookeeper 3.4.6 (but will likely 
> behave the same with the ZK version included in Kafka distribution) node 
> setup:
> # start both zookeeper and kafka (in any order)
> # stop zookeeper
> # stop kafka
> # start kafka
> # start zookeeper
> h3. Likely Cause
> {{ZookeeperLeaderElector}} subscribes to data changes on startup, and then 
> triggers an election. if the deletion of ephemeral {{/controller}} node 
> associated with previous zookeeper session of the broker happens after 
> subscription to changes in new session, election will be invoked twice, once 
> from {{startup}} and once from {{handleDataDeleted}}:
> * {{startup}}: acquire {{controllerLock}}
> * {{startup}}: subscribe to data changes
> * zookeeper: delete {{/controller}} since the session that created it timed 
> out
> * {{handleDataDeleted}}: {{/controller}} was deleted
> * {{handleDataDeleted}}: wait on {{controllerLock}}
> * {{startup}}: elect -- writes {{/controller}}
> * {{startup}}: release {{controllerLock}}
> * {{handleDataDeleted}}: acquire {{controllerLock}}
> * {{handleDataDeleted}}: elect -- attempts to write {{/controller}} and then 
> gets into infinite loop as a result of conflict
> {{createEphemeralPathExpectConflictHandleZKBug}} assumes that the existing 
> znode was written from different session, which is not true in this case; it 
> was written from the same session. That adds to the confusion.
> h3. Suggested Fix
> In {{ZookeeperLeaderElector.startup}} first run {{elect}} and then subscribe 
> to data changes.
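
The suggested ordering can be sketched as follows (stub methods only, not the real kafka.server.ZookeeperLeaderElector): electing before subscribing means a pending deletion notification for the stale-session /controller znode cannot trigger a second, conflicting election during startup.

```java
public class ElectorStartupSketch {
    static final StringBuilder trace = new StringBuilder();

    static void elect()                { trace.append("elect;"); }
    static void subscribeDataChanges() { trace.append("subscribe;"); }

    // Suggested fix from the ticket: run elect() first, then subscribe,
    // so handleDataDeleted cannot race the initial election.
    static void startup() {
        synchronized (ElectorStartupSketch.class) { // stands in for controllerLock
            elect();
            subscribeDataChanges();
        }
    }

    public static void main(String[] args) {
        startup();
        System.out.println(trace); // prints elect;subscribe;
    }
}
```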





Re: KAFKA-2364 migrate docs from SVN to git

2015-10-06 Thread Ismael Juma
Thanks Mani. Regarding the release process changes, a couple of comments:

1. Under "bug-fix releases", you mention "major release directory" a couple
of times. Is this right?
2. "Auto-generate the configuration docs" is mentioned a couple of times,
would it be worth including the command used to do this as well?

Ismael

On Tue, Oct 6, 2015 at 3:37 AM, Manikumar Reddy 
wrote:

> Hi Gwen,
>
> Kafka site is updated to use Git repo. We can now push any site changes to
> git web site repo.
>
> 1) "Contributing website changes" wiki page:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes
>
> 2) "Website update process" added to Release Process wiki page:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Process
>
> 3) PR to update contributing.html:
> https://github.com/apache/kafka-site/pull/1
>
>
> Regards
> Mani
>
> On Sat, Oct 3, 2015 at 9:28 PM, Ismael Juma  wrote:
>
> > On 3 Oct 2015 16:44, "Gwen Shapira"  wrote:
> >
> > > OK, PR 171 is in, and the latest version of the docs is now in docs/
> > > directory of trunk!
> >
> > Awesome. :)
> >
> > > Next steps:
> > > 1. Follow up with infra on our github site
> >
> > Follow-up issue filed:
> > https://issues.apache.org/jira/browse/INFRA-10539. Geoffrey
> > Corey assigned the issue to himself.
> >
> > > 2. Update the docs contribution guide
> > > 3. Update the release guide (since we are releasing docs as part of our
> > > release artifacts)
> > >
> > > Mani, I assume you are on those?
> > > Anything I'm missing?
> >
> > I can't think of anything else at this point.
> >
> > Ismael
> >
>


[GitHub] kafka-site pull request: Add Kafka 0.8.2.2 release to downloads pa...

2015-10-06 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka-site/pull/2

Add Kafka 0.8.2.2 release to downloads page



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka-site 0.8.2.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/2.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2


commit f21c50af2be9679e7f799db27291ab844e0f1730
Author: Ismael Juma 
Date:   2015-10-06T08:13:38Z

Add Kafka 0.8.2.2 release to downloads page






[GitHub] kafka-site pull request: Add Kafka 0.8.2.2 release to downloads pa...

2015-10-06 Thread ijuma
Github user ijuma commented on the pull request:

https://github.com/apache/kafka-site/pull/2#issuecomment-145778816
  
cc @junrao @gwenshap




[GitHub] kafka-site pull request: updated contributing.html

2015-10-06 Thread ijuma
Github user ijuma commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/1#discussion_r41236041
  
--- Diff: contributing.html ---
@@ -25,9 +25,8 @@
 
 To submit a change for inclusion please do the following:
 
-
-   Create a patch that applies cleanly against http://svn.apache.org/repos/asf/kafka/site/;>SVN trunk.
-   Open a https://issues.apache.org/jira/browse/KAFKA;>JIRA ticket describing 
the patch and attach your patch to the JIRA. Include in the JIRA information 
about the issue and the approach you took in fixing it (if this isn't obvious). 
Also, mark the jira as "Patch Available" by clicking on the "Submit Patch" 
button (this is done automatically if you use the code review tool below). 
Creating the JIRA will automatically send an email to the developer email list 
alerting us of the new issue.
+
+ Follow the detailed instructions in https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes;>Contributing
 Website Changes.
It is our job to follow up on patches in a timely fashion. Nag us if we aren't doing our job 
(sometimes we drop things). If the patch needs improvement, the reviwer will 
mark the jira back to "In Progress" after reviewing.
--- End diff --

Can you please fix the existing typo in this line (`reviwer`)?




[GitHub] kafka-site pull request: updated contributing.html

2015-10-06 Thread ijuma
Github user ijuma commented on the pull request:

https://github.com/apache/kafka-site/pull/1#issuecomment-145780672
  
In the page, we also have:

```
Note that if the change is related to user-facing protocols / 
interface / configs, etc, you need to make the corresponding change on the 
documentation as well. For wiki page changes feel free to edit the page content 
directly (you may need to contact us to get the permission first if it is your 
first time to edit on wiki); for website doc changes please follow the steps 
below to submit another patch as well, except it can be under the same JIRA as 
the code change and you do not need to create a new JIRA for it.
```

I think we should update that to say that it should be done as part of the 
same PR since the docs now live under the code repo.




[jira] [Commented] (KAFKA-2587) Transient test failure: `SimpleAclAuthorizerTest`

2015-10-06 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944700#comment-14944700
 ] 

Flavio Junqueira commented on KAFKA-2587:
-

bq. is it possible that the sequential node doesn't exist yet on the new ZK 
server?

When a client re-connects to a different server, it makes sure that the server 
has seen at least as many updates as the previous one by checking the latest 
zxid of the new server it is connecting to. Consequently, if a znode Z is 
observable to a client while the client is connected to server A and the client 
successfully re-connects to B, then the client must be able to see Z through B.

Assuming that the znodes under /acl_changes aren't ephemerals and haven't been 
deleted, it is not possible that the Authorizer receives a watch  notification 
while connected to server A, re-connects to server B, and doesn't see the 
change that triggered the notification while connected to B. 

> Transient test failure: `SimpleAclAuthorizerTest`
> -
>
> Key: KAFKA-2587
> URL: https://issues.apache.org/jira/browse/KAFKA-2587
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.0.0
>
>
> I've seen `SimpleAclAuthorizerTest ` fail a couple of times since its recent 
> introduction. Here's one such build:
> https://builds.apache.org/job/kafka-trunk-git-pr/576/console
> [~parth.brahmbhatt], can you please take a look and see if it's an easy fix?





[jira] [Commented] (KAFKA-2391) Blocking call such as position(), partitionsFor(), committed() and listTopics() should have a timeout

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944642#comment-14944642
 ] 

Ismael Juma commented on KAFKA-2391:


Also note that `NetworkClient.poll` already takes `requestTimeoutMs` into 
account. However, some of the loops check for some condition that won't be 
affected by the request timing out. I also mentioned this example in KAFKA-1894:

{code}
public void awaitMetadataUpdate() {
int version = this.metadata.requestUpdate();
do {
poll(Long.MAX_VALUE);
} while (this.metadata.version() == version);
}
{code}
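
For contrast, a hedged sketch of what a deadline-bounded version of that loop could look like (the Metadata type here is a minimal stand-in; the real fix in the client may differ):

```java
public class AwaitWithDeadlineSketch {
    // Minimal stand-in for the client's Metadata object.
    static class Metadata {
        private int version = 0;
        int version()       { return version; }
        int requestUpdate() { return version; } // version we must advance past
        void update()       { version++; }      // simulates a metadata response arriving
    }

    // Unlike the quoted loop, this one checks a deadline on each iteration and
    // gives up when it passes, so a request timeout can actually take effect.
    static boolean awaitMetadataUpdate(Metadata metadata, long timeoutMs) {
        int version = metadata.requestUpdate();
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (metadata.version() == version) {
            if (System.currentTimeMillis() >= deadline)
                return false; // the real client would likely throw a TimeoutException
            metadata.update(); // here: poll(deadline - now) against the network
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(awaitMetadataUpdate(new Metadata(), 1000)); // prints true
    }
}
```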

> Blocking call such as position(), partitionsFor(), committed() and 
> listTopics() should have a timeout
> -
>
> Key: KAFKA-2391
> URL: https://issues.apache.org/jira/browse/KAFKA-2391
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Onur Karaman
>
> The blocking calls should have a timeout from either configuration or 
> parameter. So far we have position(), partitionsFor(), committed() and 
> listTopics().





[GitHub] kafka pull request: KAFKA-2428

2015-10-06 Thread MayureshGharat
GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/282

KAFKA-2428

Add sanity test in kafkaConsumer for the timeouts. This is a followup 
ticket for Kafka-2120.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka Kafka-2428

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/282.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #282


commit 7a9bd41c1ba5eb1bce45c51d6227ac61a184bb16
Author: Mayuresh Gharat 
Date:   2015-10-07T01:40:09Z

Add sanity test in kafkaConsumer for the timeouts. This is a followup 
ticket for Kafka-2120.






[jira] [Commented] (KAFKA-2428) Add sanity test in kafkaConsumer for the timeouts. This is a followup ticket for Kafka-2120

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946158#comment-14946158
 ] 

ASF GitHub Bot commented on KAFKA-2428:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/282

KAFKA-2428

Add sanity test in kafkaConsumer for the timeouts. This is a followup 
ticket for Kafka-2120.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka Kafka-2428

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/282.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #282


commit 7a9bd41c1ba5eb1bce45c51d6227ac61a184bb16
Author: Mayuresh Gharat 
Date:   2015-10-07T01:40:09Z

Add sanity test in kafkaConsumer for the timeouts. This is a followup 
ticket for Kafka-2120.




> Add sanity test in kafkaConsumer for the timeouts. This is a followup ticket 
> for Kafka-2120
> ---
>
> Key: KAFKA-2428
> URL: https://issues.apache.org/jira/browse/KAFKA-2428
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Fix For: 0.9.0.0
>
>
> The request timeout should be the highest timeout across all the timeouts. The 
> rules should be:
> Request timeout > session timeout.
> Request timeout > fetch.max.wait.timeout
> The request timeout won't kick in before the other timeouts are reached.
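
A minimal sketch of the sanity check those rules imply (parameter names are illustrative, not the real ConsumerConfig keys):

```java
public class TimeoutSanitySketch {
    // Enforce: request timeout > session timeout and > fetch.max.wait,
    // so the request timeout can never fire before the other timeouts.
    static void validate(long requestTimeoutMs, long sessionTimeoutMs, long fetchMaxWaitMs) {
        if (requestTimeoutMs <= sessionTimeoutMs)
            throw new IllegalArgumentException("request timeout must be > session timeout");
        if (requestTimeoutMs <= fetchMaxWaitMs)
            throw new IllegalArgumentException("request timeout must be > fetch.max.wait");
    }

    public static void main(String[] args) {
        validate(40000, 30000, 500); // valid ordering, no exception thrown
        System.out.println("ok");
    }
}
```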





Re: [VOTE] KIP-31 - Move to relative offsets in compressed message sets.

2015-10-06 Thread Dong Lin
+1

Dong

On Tue, Oct 6, 2015 at 2:58 PM, Jiangjie Qin 
wrote:

> Hi folks,
>
> Sorry for this prolonged voting session and thanks for the votes.
>
> There is an additional broker configuration change added to the KIP after
> the vote. We propose to add a message.format.version configuration to the
> broker to indicate which version it should use to store the message on
> disk.
>
> It is mainly trying to minimize the format conversion for consumption
> during rolling out. Because the client upgrade could take some time and it
> can be expensive to give up zero-copy for the majority of the consumers, we
> want to avoid doing that.
>
> I would like to see if people have concerns over this change or not. If
> there are no concerns, I will close the vote as passed. Otherwise I will
> initiate another vote.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
> On Fri, Sep 25, 2015 at 4:41 PM, Ewen Cheslack-Postava 
> wrote:
>
> > +1
> >
> > -Ewen
> >
> > On Fri, Sep 25, 2015 at 11:15 AM, Jun Rao  wrote:
> >
> > > +1. I agree that it's worth thinking through the migration plan a bit
> > more.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Thu, Sep 24, 2015 at 6:14 PM, Joel Koshy 
> wrote:
> > >
> > > > +1 on everything but the upgrade plan, which is a bit scary - will
> > > > comment on the discuss thread.
> > > >
> > > > On Thu, Sep 24, 2015 at 9:51 AM, Mayuresh Gharat
> > > >  wrote:
> > > > > +1
> > > > >
> > > > > On Wed, Sep 23, 2015 at 10:16 PM, Guozhang Wang <
> wangg...@gmail.com>
> > > > wrote:
> > > > >
> > > > >> +1
> > > > >>
> > > > >> On Wed, Sep 23, 2015 at 9:32 PM, Aditya Auradkar <
> > > > >> aaurad...@linkedin.com.invalid> wrote:
> > > > >>
> > > > >> > +1
> > > > >> >
> > > > >> > On Wed, Sep 23, 2015 at 8:03 PM, Neha Narkhede <
> n...@confluent.io
> > >
> > > > >> wrote:
> > > > >> >
> > > > >> > > +1
> > > > >> > >
> > > > >> > > On Wed, Sep 23, 2015 at 6:21 PM, Todd Palino <
> tpal...@gmail.com
> > >
> > > > >> wrote:
> > > > >> > >
> > > > >> > > > +1000
> > > > >> > > >
> > > > >> > > > !
> > > > >> > > >
> > > > >> > > > -Todd
> > > > >> > > >
> > > > >> > > > On Wednesday, September 23, 2015, Jiangjie Qin
> > > > >> >  > > > >> > > >
> > > > >> > > > wrote:
> > > > >> > > >
> > > > >> > > > > Hi,
> > > > >> > > > >
> > > > >> > > > > Thanks a lot for the reviews and feedback on KIP-31. It
> > looks
> > > > all
> > > > >> the
> > > > >> > > > > concerns of the KIP has been addressed. I would like to
> > start
> > > > the
> > > > >> > > voting
> > > > >> > > > > process.
> > > > >> > > > >
> > > > >> > > > > The short summary for the KIP:
> > > > >> > > > > We are going to use the relative offset in the message
> > format
> > > to
> > > > >> > avoid
> > > > >> > > > > server side recompression.
> > > > >> > > > >
> > > > >> > > > > In case you haven't got a chance to check, here is the KIP
> > > link.
> > > > >> > > > >
> > > > >> > > > >
> > > > >> > > >
> > > > >> > >
> > > > >> >
> > > > >>
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-31+-+Move+to+relative+offsets+in+compressed+message+sets
> > > > >> > > > >
> > > > >> > > > > Thanks,
> > > > >> > > > >
> > > > >> > > > > Jiangjie (Becket) Qin
> > > > >> > > > >
> > > > >> > > >
> > > > >> > >
> > > > >> > >
> > > > >> > >
> > > > >> > > --
> > > > >> > > Thanks,
> > > > >> > > Neha
> > > > >> > >
> > > > >> >
> > > > >>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> -- Guozhang
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -Regards,
> > > > > Mayuresh R. Gharat
> > > > > (862) 250-7125
> > > >
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>
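
The relative-offset scheme being voted on can be illustrated with a toy calculation (simplified; the actual KIP-31 wire format carries more than this):

```java
import java.util.Arrays;

public class RelativeOffsetSketch {
    // With relative offsets, inner messages of a compressed message set are
    // numbered 0, 1, 2, ... at produce time; the broker only rewrites the
    // wrapper's base offset at append time, avoiding decompress/recompress.
    static long[] absoluteOffsets(long baseOffset, int[] relativeOffsets) {
        long[] absolute = new long[relativeOffsets.length];
        for (int i = 0; i < relativeOffsets.length; i++)
            absolute[i] = baseOffset + relativeOffsets[i];
        return absolute;
    }

    public static void main(String[] args) {
        int[] relative = {0, 1, 2};  // fixed at produce time
        long base = 500L;            // assigned by the broker
        System.out.println(Arrays.toString(absoluteOffsets(base, relative)));
        // prints [500, 501, 502]
    }
}
```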


Re: [DISCUSS] Email to dev list for GitHub PR comments

2015-10-06 Thread Gwen Shapira
Agree with Guozhang.

On Tue, Oct 6, 2015 at 3:22 PM, Guozhang Wang  wrote:

> I think github cannot batch comments in emails (yet?), which is sad..
>
> I would prefer to keep both github@kafka / github@kafka-site to send only
> open/close PRs unless you subscribe to some tickets.
>
> Guozhang
>
> On Tue, Oct 6, 2015 at 9:49 AM, Jiangjie Qin 
> wrote:
>
> > Hi Ismael,
> >
> > Thanks for bringing this up. Completely agree the exploding amount of
> > emails is a little annoying, regardless they are sent to dev list or
> > personal emails.
> >
> > Not sure whether it is doable or not, but here is what I am thinking.
> > 1. batch the comments email and send periodically to dev list or project
> > subscribers. e.g. 4 hours a day.
> > 2. direct email the PR submitter/reviewers when comments are put.
> >
> > Not sure if github can do that or not. Maybe worth sending email to ask.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> >
> >
> > On Tue, Oct 6, 2015 at 1:35 AM, Ismael Juma  wrote:
> >
> > > Hi all,
> > >
> > > You may have noticed that we receive one email for each comment in
> > > kafka-site pull requests. We don't have that enabled for the kafka (ie
> > > code) repository. Maybe that's OK as the number of emails would be much
> > > higher for the code repository, but I thought it would be good to get
> > other
> > > people's opinions on it.
> > >
> > > So, for the code repository, would you prefer if:
> > >
> > > 1. We leave things as they are (emails to dev list are sent for
> > > opening/closing of PRs and other notifications are handled by one's own
> > > GitHub notification settings)
> > > 2. We change it to be like kafka-site and an email is sent to the dev
> > list
> > > for each PR comment
> > > 3. Something else
> > >
> > > Ismael
> > >
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Updated] (KAFKA-2428) Add sanity test in kafkaConsumer for the timeouts. This is a followup ticket for Kafka-2120

2015-10-06 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2428:
---
Fix Version/s: 0.9.0.0

> Add sanity test in kafkaConsumer for the timeouts. This is a followup ticket 
> for Kafka-2120
> ---
>
> Key: KAFKA-2428
> URL: https://issues.apache.org/jira/browse/KAFKA-2428
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
> Fix For: 0.9.0.0
>
>
> The request timeout should be the highest timeout across all the timeouts. The 
> rules should be:
> Request timeout > session timeout.
> Request timeout > fetch.max.wait.timeout
> The request timeout won't kick in before the other timeouts are reached.





[jira] [Updated] (KAFKA-2203) Get gradle build to work with Java 8

2015-10-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2203:
---
Assignee: Gwen Shapira

> Get gradle build to work with Java 8
> 
>
> Key: KAFKA-2203
> URL: https://issues.apache.org/jira/browse/KAFKA-2203
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.1.1
>Reporter: Gaju Bhat
>Assignee: Gwen Shapira
>Priority: Minor
> Fix For: 0.9.0.0
>
> Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch
>
>
> The gradle build halts because javadoc in java 8 is a lot stricter about 
> valid html.
> It might be worthwhile to special case java 8 as described 
> [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].





[jira] [Created] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2612:
--

 Summary: Increase the number of retained builds for 
kafka-trunk-git-pr-jdk7
 Key: KAFKA-2612
 URL: https://issues.apache.org/jira/browse/KAFKA-2612
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma


It seems like we only retain 12 and this means that we get a 404 for PRs that 
are still active. We need a much higher number, maybe 50?





[jira] [Commented] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944832#comment-14944832
 ] 

Ismael Juma commented on KAFKA-2612:


[~fpj] or [~guozhang], can any of you help?

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





[jira] [Updated] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2612:
---
Component/s: build

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





Build failed in Jenkins: kafka-trunk-jdk8 #5

2015-10-06 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2474: Add caching of JSON schema conversions to JsonConverter

--
[...truncated 6513 lines...]
org.apache.kafka.copycat.data.SchemaBuilderTest > testBooleanBuilder PASSED

org.apache.kafka.copycat.data.SchemaBuilderTest > testDoubleBuilder PASSED

org.apache.kafka.copycat.connector.ConnectorReconfigurationTest > 
testReconfigureStopException PASSED

org.apache.kafka.copycat.connector.ConnectorReconfigurationTest > 
testDefaultReconfigure PASSED

org.apache.kafka.copycat.data.FieldTest > testEquality PASSED
:copycat:file:checkstyleMain
:copycat:file:compileTestJava
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-module-junit4/1.6.2/powermock-module-junit4-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-modules/1.6.2/powermock-modules-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock/1.6.2/powermock-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-api-easymock/1.6.2/powermock-api-easymock-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-api/1.6.2/powermock-api-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-module-junit4-common/1.6.2/powermock-module-junit4-common-1.6.2.pom
Download 
https://repo1.maven.org/maven2/cglib/cglib-nodep/2.2.2/cglib-nodep-2.2.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-api-support/1.6.2/powermock-api-support-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-core/1.6.2/powermock-core-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-reflect/1.6.2/powermock-reflect-1.6.2.pom
Download 
https://repo1.maven.org/maven2/org/javassist/javassist/3.19.0-GA/javassist-3.19.0-GA.pom
Download https://repo1.maven.org/maven2/junit/junit/4.12/junit-4.12.pom
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-module-junit4/1.6.2/powermock-module-junit4-1.6.2.jar
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-api-easymock/1.6.2/powermock-api-easymock-1.6.2.jar
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-module-junit4-common/1.6.2/powermock-module-junit4-common-1.6.2.jar
Download 
https://repo1.maven.org/maven2/cglib/cglib-nodep/2.2.2/cglib-nodep-2.2.2.jar
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-api-support/1.6.2/powermock-api-support-1.6.2.jar
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-core/1.6.2/powermock-core-1.6.2.jar
Download 
https://repo1.maven.org/maven2/org/powermock/powermock-reflect/1.6.2/powermock-reflect-1.6.2.jar
Download 
https://repo1.maven.org/maven2/org/javassist/javassist/3.19.0-GA/javassist-3.19.0-GA.jar
Download https://repo1.maven.org/maven2/junit/junit/4.12/junit-4.12.jar
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning
:copycat:file:processTestResources UP-TO-DATE
:copycat:file:testClasses
:copycat:file:checkstyleTest
:copycat:file:test

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > testSourceTasks 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testSourceTasksStdin PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > testTaskClass 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

org.apache.kafka.copycat.file.FileStreamSinkTaskTest > testPutFlush PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testSinkTasks PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testTaskClass PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testNormalLifecycle 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testMissingTopic PASSED
:copycat:json:checkstyleMain
:copycat:json:compileTestJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:json:processTestResources UP-TO-DATE
:copycat:json:testClasses
:copycat:json:checkstyleTest
:copycat:json:test

org.apache.kafka.copycat.json.JsonConverterTest > longToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToJsonConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
nullSchemaAndMapNonStringKeysToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 

Build failed in Jenkins: kafka-trunk-jdk7 #662

2015-10-06 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2474: Add caching of JSON schema conversions to JsonConverter

--
[...truncated 320 lines...]
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala UP-TO-DATE
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:javadoc
:kafka-trunk-jdk7:core:javadoc
cache taskArtifacts.bin 
(
 is corrupt. Discarding.
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:277:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:278:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 14 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk7:core:scaladocJar
:kafka-trunk-jdk7:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:clients:javadoc
:kafka-trunk-jdk7:log4j-appender:compileJava
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes
:kafka-trunk-jdk7:log4j-appender:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^

[jira] [Assigned] (KAFKA-2364) Improve documentation for contributing to docs

2015-10-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-2364:
--

Assignee: Ismael Juma

> Improve documentation for contributing to docs
> --
>
> Key: KAFKA-2364
> URL: https://issues.apache.org/jira/browse/KAFKA-2364
> Project: Kafka
>  Issue Type: Task
>Reporter: Aseem Bansal
>Assignee: Ismael Juma
>Priority: Minor
>  Labels: doc
>
> While reading the documentation for kafka 8 I saw some improvements that can 
> be made. But the docs for contributing are not very good at 
> https://github.com/apache/kafka. It just gives me a URL for svn. But I am not 
> sure what to do. Can the README.MD file be improved for contributing to docs?
> I have submitted patches to groovy and grails by sending PRs via github but  
> looking at the comments on PRs submitted to kafka it seems PRs via github are 
> not working for kafka. It would be good to make that work also.





[jira] [Assigned] (KAFKA-2364) Improve documentation for contributing to docs

2015-10-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-2364:
--

Assignee: Manikumar Reddy  (was: Ismael Juma)

The website is now in a Git repository (kafka-site) and the website docs live 
in the code repo under `docs` so that changes to that can be done in the same 
PR as changes to the code.

Manikumar is improving the documentation with these details, see

https://github.com/apache/kafka-site/pull/1

Manikumar, I've assigned the issue to you since you're working on it. I hope 
that's OK. Please close it once the PR to `kafka-site` is merged and the 
website is updated.

> Improve documentation for contributing to docs
> --
>
> Key: KAFKA-2364
> URL: https://issues.apache.org/jira/browse/KAFKA-2364
> Project: Kafka
>  Issue Type: Task
>Reporter: Aseem Bansal
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: doc
>
> While reading the documentation for kafka 8 I saw some improvements that can 
> be made. But the docs for contributing are not very good at 
> https://github.com/apache/kafka. It just gives me a URL for svn. But I am not 
> sure what to do. Can the README.MD file be improved for contributing to docs?
> I have submitted patches to groovy and grails by sending PRs via github but  
> looking at the comments on PRs submitted to kafka it seems PRs via github are 
> not working for kafka. It would be good to make that work also.





[jira] [Updated] (KAFKA-2364) Improve documentation for contributing to docs

2015-10-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2364:
---
Reviewer: Gwen Shapira

> Improve documentation for contributing to docs
> --
>
> Key: KAFKA-2364
> URL: https://issues.apache.org/jira/browse/KAFKA-2364
> Project: Kafka
>  Issue Type: Task
>Reporter: Aseem Bansal
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: doc
>
> While reading the documentation for kafka 8 I saw some improvements that can 
> be made. But the docs for contributing are not very good at 
> https://github.com/apache/kafka. It just gives me a URL for svn. But I am not 
> sure what to do. Can the README.MD file be improved for contributing to docs?
> I have submitted patches to groovy and grails by sending PRs via github but  
> looking at the comments on PRs submitted to kafka it seems PRs via github are 
> not working for kafka. It would be good to make that work also.





[GitHub] kafka pull request: Reduce logging level for controller connection...

2015-10-06 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/280

Reduce logging level for controller connection failures from `error` to 
`warn`

Before we switched from `BlockingChannel` to `NetworkClient`, we were
always reporting a successful connection due to the fact that
`BlockingChannel.connect` catches and swallows all exceptions. We
are now reporting failures (which is better), but `error` seems too
noisy (as can be seen in our tests).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
reduce-connection-failure-logging-level

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/280.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #280


commit ab58bf11d128c23413fe30d5a486b73493d5511f
Author: Ismael Juma 
Date:   2015-10-06T11:31:08Z

Reduce logging level for controller connection failures from `error` to 
`warn`

Before we switched from `BlockingChannel` to `NetworkClient`, we were
always reporting a successful connection due to the fact that
`BlockingChannel.connect` catches and swallows all exceptions. We
are now reporting failures (which is better), but `error` seems too
noisy (as can be seen in our tests).




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: KAFKA-2364 migrate docs from SVN to git

2015-10-06 Thread Manikumar Reddy
On Tue, Oct 6, 2015 at 1:34 PM, Ismael Juma  wrote:

> Thanks Mani. Regarding the release process changes, a couple of comments:
>
> 1. Under "bug-fix releases", you mention "major release directory" a couple
> of times. Is this right?
>

 Hmm, not sure. For bug-fix releases like 0.8.2.X, we are referring to their
major release docs (the 0.8.2 release). In that sense, I used "major release
directory". I may be wrong.


> 2. "Auto-generate the configuration docs" is mentioned a couple of times,
> would it be worth including the command used to do this as well?
>

  Yes, updated the wiki page.


>
> Ismael
>
> On Tue, Oct 6, 2015 at 3:37 AM, Manikumar Reddy 
> wrote:
>
> > Hi Gwen,
> >
> > Kafka site is updated to use Git repo. We can now push any site changes
> to
> > git web site repo.
> >
> > 1) "Contributing website changes" wiki page:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes
> >
> > 2) "Website update process" added to Release Process wiki page:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Process
> >
> > 3) PR to update contributing.html:
> > https://github.com/apache/kafka-site/pull/1
> >
> >
> > Regards
> > Mani
> >
> > On Sat, Oct 3, 2015 at 9:28 PM, Ismael Juma  wrote:
> >
> > > On 3 Oct 2015 16:44, "Gwen Shapira"  wrote:
> > >
> > > > OK, PR 171 is in, and the latest version of the docs is now in docs/
> > > > directory of trunk!
> > >
> > > Awesome. :)
> > >
> > > > Next steps:
> > > > 1. Follow up with infra on our github site
> > >
> > > Follow-up issue filed:
> > > https://issues.apache.org/jira/browse/INFRA-10539. Geoffrey
> > > Corey assigned the issue to himself.
> > >
> > > > 2. Update the docs contribution guide
> > > > 3. Update the release guide (since we are releasing docs as part of
> our
> > > > release artifacts)
> > > >
> > > > Mani, I assume you are on those?
> > > > Anything I'm missing?
> > >
> > > I can't think of anything else at this point.
> > >
> > > Ismael
> > >
> >
>


[jira] [Comment Edited] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944990#comment-14944990
 ] 

Flavio Junqueira edited comment on KAFKA-2612 at 10/6/15 1:07 PM:
--

(I said 15 previously but set to 14, so I'm just correcting the value with this 
edit)
 
OK, to play nicely, I've set for the git-pr job the number of days to keep to 
14 and max number of builds to 100. It sounds better to set a threshold, even 
if large.  


was (Author: fpj):
OK, to play nicely, I've set for the git-pr job the number of days to keep to 
15 and max number of builds to 100. It sounds better to set a threshold, even 
if large.  

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





[jira] [Commented] (KAFKA-2609) SSL renegotiation code paths need more tests

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944965#comment-14944965
 ] 

Ismael Juma commented on KAFKA-2609:


[~harsha_ch], my understanding is the same as yours, we don't need 
renegotiation support for 0.9.0.0. Maybe the thing to do is to turn it off 
using JDK options as [~rsivaram] said and then target this issue to a 
subsequent release. Is that what you had in mind?
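For reference, one JDK-level switch in this area is the JSSE system property below; whether this is the exact option intended here is an assumption:

{code}
# Reject TLS renegotiation initiated by the client (server side)
-Djdk.tls.rejectClientInitiatedRenegotiation=true
{code}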

> SSL renegotiation code paths need more tests
> 
>
> Key: KAFKA-2609
> URL: https://issues.apache.org/jira/browse/KAFKA-2609
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> If renegotiation is triggered when read interest is off, at the moment it 
> looks like read interest is never turned back on. More unit tests are 
> required to test different renegotiation scenarios since these are much 
> harder to exercise in system tests.





[jira] [Commented] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944966#comment-14944966
 ] 

Flavio Junqueira commented on KAFKA-2612:
-

There is also the option of "Days to keep builds" rather than "Max # of builds 
to keep". I think we should switch to number of days, something like 21 days, 
sounds good? [~ijuma]

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





[jira] [Created] (KAFKA-2613) Consider capping `maxParallelForks` for Jenkins builds

2015-10-06 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2613:
--

 Summary: Consider capping `maxParallelForks` for Jenkins builds
 Key: KAFKA-2613
 URL: https://issues.apache.org/jira/browse/KAFKA-2613
 Project: Kafka
  Issue Type: Sub-task
  Components: build
Reporter: Ismael Juma


We currently set `maxParallelForks` to the number returned by 
`Runtime.availableProcessors`.

{code}
  tasks.withType(Test) {
maxParallelForks = Runtime.runtime.availableProcessors()
  }
{code}

This returns the number of logical cores (including hyperthreaded cores) in the 
machine.

This is usually OK when running the tests locally, but the Apache Jenkins 
slaves run 2 to 3 jobs simultaneously causing a higher number of timing related 
failures.

A potential solution is to allow `maxParallelForks` to be set via a Gradle 
property and use that property to set it to an appropriate value when the build 
is run from Jenkins.
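A sketch of that approach (the Gradle property name `maxParallelForks` is an assumption):

{code}
  tasks.withType(Test) {
    // Cap parallelism from the command line, e.g. ./gradlew test -PmaxParallelForks=2
    maxParallelForks = project.hasProperty('maxParallelForks') ?
        project.property('maxParallelForks').toString().toInteger() :
        Runtime.runtime.availableProcessors()
  }
{code}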





[jira] [Commented] (KAFKA-2609) SSL renegotiation code paths need more tests

2015-10-06 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944993#comment-14944993
 ] 

Sriharsha Chintalapani commented on KAFKA-2609:
---

Yes [~ijuma]. Even after we get this in, I would see this as optional rather 
than turned on by default.

> SSL renegotiation code paths need more tests
> 
>
> Key: KAFKA-2609
> URL: https://issues.apache.org/jira/browse/KAFKA-2609
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> If renegotiation is triggered when read interest is off, at the moment it 
> looks like read interest is never turned back on. More unit tests are 
> required to test different renegotiation scenarios since these are much 
> harder to exercise in system tests.





[jira] [Commented] (KAFKA-2609) SSL renegotiation code paths need more tests

2015-10-06 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944938#comment-14944938
 ] 

Sriharsha Chintalapani commented on KAFKA-2609:
---

[~rsivaram] [~ijuma] Do we need to release this as part of 0.9.0? When the SSL 
patch got in, we decided to revisit renegotiation as part of the next release. 
Also, let's make this optional, i.e. turned off by default; I don't see many 
users using weak crypto.

> SSL renegotiation code paths need more tests
> 
>
> Key: KAFKA-2609
> URL: https://issues.apache.org/jira/browse/KAFKA-2609
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> If renegotiation is triggered when read interest is off, at the moment it 
> looks like read interest is never turned back on. More unit tests are 
> required to test different renegotiation scenarios since these are much 
> harder to exercise in system tests.





[jira] [Assigned] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira reassigned KAFKA-2612:
---

Assignee: Flavio Junqueira

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





[jira] [Commented] (KAFKA-2428) Add sanity test in kafkaConsumer for the timeouts. This is a followup ticket for Kafka-2120

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944904#comment-14944904
 ] 

Ismael Juma commented on KAFKA-2428:


[~mgharat], is this something we need for 0.9.0.0? If so, we should set the fix 
version.

> Add sanity test in kafkaConsumer for the timeouts. This is a followup ticket 
> for Kafka-2120
> ---
>
> Key: KAFKA-2428
> URL: https://issues.apache.org/jira/browse/KAFKA-2428
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> The request timeout should be the highest of all the timeouts. The 
> rules should be:
> Request timeout > session timeout.
> Request timeout > fetch.max.wait.timeout
> so that the request timeout won't kick in before the other timeouts are reached.





[jira] [Updated] (KAFKA-2455) Test Failure: kafka.consumer.MetricsTest > testMetricsLeak

2015-10-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2455:
---
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> Test Failure: kafka.consumer.MetricsTest > testMetricsLeak 
> ---
>
> Key: KAFKA-2455
> URL: https://issues.apache.org/jira/browse/KAFKA-2455
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>
> I've seen this failure in builds twice recently:
> kafka.consumer.MetricsTest > testMetricsLeak FAILED
> java.lang.AssertionError: expected:<174> but was:<176>
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.failNotEquals(Assert.java:689)
> at org.junit.Assert.assertEquals(Assert.java:127)
> at org.junit.Assert.assertEquals(Assert.java:514)
> at org.junit.Assert.assertEquals(Assert.java:498)
> at 
> kafka.consumer.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:65)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at kafka.consumer.MetricsTest.testMetricsLeak(MetricsTest.scala:63)





[jira] [Commented] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944997#comment-14944997
 ] 

Ismael Juma commented on KAFKA-2612:


Thanks!

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





Re: KAFKA-2364 migrate docs from SVN to git

2015-10-06 Thread Ismael Juma
Comments below.

On Tue, Oct 6, 2015 at 1:39 PM, Manikumar Reddy 
wrote:

> On Tue, Oct 6, 2015 at 1:34 PM, Ismael Juma  wrote:
>
> > Thanks Mani. Regarding the release process changes, a couple of comments:
> >
> > 1. Under "bug-fix releases", you mention "major release directory" a
> couple
> > of times. Is this right?
> >
>
> hmm..not sure. For bug fix releases like 0.8.2.X, we are referring its
> major release docs (0.8.2 release). In that sense, i used "major release
> directory". I may be wrong.
>

I see what you mean. I actually don't know what is the current process in
that regard, so I'll leave it to Gwen. :)

> 2. "Auto-generate the configuration docs" is mentioned a couple of times,
> > would it be worth including the command used to do this as well?
> >
>
>   Yes, updated the wiki page.
>

Thanks.

Ismael


[jira] [Commented] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944978#comment-14944978
 ] 

Ismael Juma commented on KAFKA-2612:


That sounds good to me [~fpj].

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





[jira] [Commented] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944990#comment-14944990
 ] 

Flavio Junqueira commented on KAFKA-2612:
-

OK, to play nicely, I've set for the git-pr job the number of days to keep to 
15 and max number of builds to 100. It sounds better to set a threshold, even 
if large.  

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





[GitHub] kafka-site pull request: Add Kafka 0.8.2.2 release to downloads pa...

2015-10-06 Thread gwenshap
Github user gwenshap commented on the pull request:

https://github.com/apache/kafka-site/pull/2#issuecomment-145928533
  
Thank you, @ijuma for the very first Kafka site PR!




[jira] [Assigned] (KAFKA-2459) Connection backoff/blackout period should start when a connection is disconnected, not when the connection attempt was initiated

2015-10-06 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska reassigned KAFKA-2459:
---

Assignee: Eno Thereska  (was: Manikumar Reddy)

> Connection backoff/blackout period should start when a connection is 
> disconnected, not when the connection attempt was initiated
> 
>
> Key: KAFKA-2459
> URL: https://issues.apache.org/jira/browse/KAFKA-2459
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Eno Thereska
>
> Currently the connection code for new clients marks the time when a 
> connection was initiated (NodeConnectionState.lastConnectMs) and then uses 
> this to compute blackout periods for nodes, during which connections will not 
> be attempted and the node is not considered a candidate for leastLoadedNode.
> However, in cases where the connection attempt takes longer than the 
> blackout/backoff period (default 10ms), this results in incorrect behavior. 
> If a broker is not available and, for example, the broker does not explicitly 
> reject the connection, instead waiting for a connection timeout (e.g. due to 
> firewall settings), then the backoff period will have already elapsed and the 
> node will immediately be considered ready for a new connection attempt and a 
> node to be selected by leastLoadedNode for metadata updates. I think it 
> should be easy to reproduce and verify this problem manually by using tc to 
> introduce enough latency to make connection failures take > 10ms.
> The correct behavior would use the disconnection event to mark the end of the 
> last connection attempt and then wait for the backoff period to elapse after 
> that.
> See 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201508.mbox/%3CCAJY8EofpeU4%2BAJ%3Dw91HDUx2RabjkWoU00Z%3DcQ2wHcQSrbPT4HA%40mail.gmail.com%3E
>  for the original description of the problem.
> This is related to KAFKA-1843 because leastLoadedNode currently will 
> consistently choose the same node if this blackout period is not handled 
> correctly, but is a much smaller issue.
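The correct behavior described above can be sketched as follows. This is a minimal, self-contained model with illustrative names (`blackedOutFromConnectTime`, `blackedOutFromDisconnectTime`), not the actual NetworkClient code: anchoring the blackout window to the disconnect time keeps the node blacked out even when the connection attempt itself outlasted the backoff period.

```java
// Sketch of the backoff bookkeeping under discussion (hypothetical names,
// not the real NodeConnectionState fields).
public class BackoffSketch {
    static final long RECONNECT_BACKOFF_MS = 10;

    // Current (buggy) variant: blackout measured from when the attempt began.
    static boolean blackedOutFromConnectTime(long lastConnectMs, long nowMs) {
        return nowMs - lastConnectMs < RECONNECT_BACKOFF_MS;
    }

    // Proposed variant: blackout measured from when the disconnect was observed.
    static boolean blackedOutFromDisconnectTime(long lastDisconnectMs, long nowMs) {
        return nowMs - lastDisconnectMs < RECONNECT_BACKOFF_MS;
    }

    public static void main(String[] args) {
        long connectMs = 0;      // attempt initiated at t=0
        long disconnectMs = 50;  // connection timed out 50 ms later (> backoff)
        long nowMs = 55;
        // Buggy: the backoff elapsed during the attempt, so the node is
        // immediately eligible again and leastLoadedNode can pick it.
        System.out.println("buggy blacked out: "
                + blackedOutFromConnectTime(connectMs, nowMs));
        // Fixed: still inside the 10 ms window following the disconnect.
        System.out.println("fixed blacked out: "
                + blackedOutFromDisconnectTime(disconnectMs, nowMs));
    }
}
```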





[jira] [Commented] (KAFKA-2452) enable new consumer in mirror maker

2015-10-06 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945370#comment-14945370
 ] 

Jiangjie Qin commented on KAFKA-2452:
-

[~zhoux_samuel] It is expected that no zk path is created in this case, because 
consumer information is no longer kept in zookeeper but in the coordinator 
(today we do not persist the consumer data anywhere, but we will).

The way the consumer offset checker works is that it first checks whether there 
is a committed offset on the broker; if there isn't any, it assumes the offset 
is in zookeeper. Because the new consumer does not register itself in 
zookeeper, the checker throws the NoNode exception you saw. Remember that an 
offset commit will only occur if:
1. There is a consumed offset on the consumer, i.e. the consumer actually saw 
some messages.
2. The consumer has already committed at least once. The commit interval is one 
minute by default.

Maybe you can try the following:
1. Turn on trace-level logging on mirror maker.
2. Make sure some messages have been consumed by mirror maker.
3. Wait for some time; you should see "Committing offsets" in the log.
4. Run the offset checker.

> enable new consumer in mirror maker
> ---
>
> Key: KAFKA-2452
> URL: https://issues.apache.org/jira/browse/KAFKA-2452
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add an option to enable the new consumer in mirror maker.





[jira] [Commented] (KAFKA-1451) Broker stuck due to leader election race

2015-10-06 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945382#comment-14945382
 ] 

Jiangjie Qin commented on KAFKA-1451:
-

[~laxpio] May be related to KAFKA-2437.

> Broker stuck due to leader election race 
> -
>
> Key: KAFKA-1451
> URL: https://issues.apache.org/jira/browse/KAFKA-1451
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Maciek Makowski
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1451.patch, KAFKA-1451_2014-07-28_20:27:32.patch, 
> KAFKA-1451_2014-07-29_10:13:23.patch
>
>
> h3. Symptoms
> The broker does not become available due to being stuck in an infinite loop 
> while electing leader. This can be recognised by the following line being 
> repeatedly written to server.log:
> {code}
> [2014-05-14 04:35:09,187] INFO I wrote this conflicted ephemeral node 
> [{"version":1,"brokerid":1,"timestamp":"1400060079108"}] at /controller a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> {code}
> h3. Steps to Reproduce
> In a single kafka 0.8.1.1 node, single zookeeper 3.4.6 (but will likely 
> behave the same with the ZK version included in Kafka distribution) node 
> setup:
> # start both zookeeper and kafka (in any order)
> # stop zookeeper
> # stop kafka
> # start kafka
> # start zookeeper
> h3. Likely Cause
> {{ZookeeperLeaderElector}} subscribes to data changes on startup, and then 
> triggers an election. If the deletion of the ephemeral {{/controller}} node 
> associated with the broker's previous zookeeper session happens after the 
> subscription to changes in the new session, the election will be invoked 
> twice, once from {{startup}} and once from {{handleDataDeleted}}:
> * {{startup}}: acquire {{controllerLock}}
> * {{startup}}: subscribe to data changes
> * zookeeper: delete {{/controller}} since the session that created it timed 
> out
> * {{handleDataDeleted}}: {{/controller}} was deleted
> * {{handleDataDeleted}}: wait on {{controllerLock}}
> * {{startup}}: elect -- writes {{/controller}}
> * {{startup}}: release {{controllerLock}}
> * {{handleDataDeleted}}: acquire {{controllerLock}}
> * {{handleDataDeleted}}: elect -- attempts to write {{/controller}} and then 
> gets into infinite loop as a result of conflict
> {{createEphemeralPathExpectConflictHandleZKBug}} assumes that the existing 
> znode was written from a different session, which is not true in this case; 
> it was written from the same session. That adds to the confusion.
> h3. Suggested Fix
> In {{ZookeeperLeaderElector.startup}} first run {{elect}} and then subscribe 
> to data changes.
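The ticket's suggested fix is to reorder {{elect}} and the subscription. A complementary defensive measure, sketched below with hypothetical names (this is not the actual patch or the real {{ZookeeperLeaderElector}}), is to make the election idempotent so that a duplicate invocation from {{handleDataDeleted}} cannot attempt a second, conflicting {{/controller}} write.

```java
// Hedged sketch: guard elect() so a second call is a no-op once this broker
// already holds leadership. Names and structure are illustrative only.
public class GuardedElectorSketch {
    private final Object controllerLock = new Object();
    private final int brokerId = 1;
    private int leaderId = -1;   // -1 models an empty /controller znode
    int controllerWrites = 0;    // counts attempts to write /controller

    boolean amILeader() { return leaderId == brokerId; }

    void elect() {
        synchronized (controllerLock) {
            if (amILeader()) return;   // guard: duplicate invocation is a no-op
            controllerWrites++;        // models creating the ephemeral znode
            leaderId = brokerId;
        }
    }

    public static void main(String[] args) {
        GuardedElectorSketch e = new GuardedElectorSketch();
        e.elect();   // invoked from startup
        e.elect();   // invoked again from handleDataDeleted
        // Only one write is attempted, so no self-conflict loop can start.
        System.out.println("controller writes: " + e.controllerWrites);
    }
}
```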





Re: [DISCUSS] Email to dev list for GitHub PR comments

2015-10-06 Thread Jiangjie Qin
Hi Ismael,

Thanks for bringing this up. I completely agree that the exploding number of
emails is a little annoying, regardless of whether they are sent to the dev
list or to personal inboxes.

Not sure whether it is doable, but here is what I am thinking:
1. Batch the comment emails and send them periodically to the dev list or
project subscribers, e.g. every 4 hours.
2. Email the PR submitter/reviewers directly when comments are posted.

Not sure whether GitHub supports that. It may be worth sending an email to ask.

Thanks,

Jiangjie (Becket) Qin



On Tue, Oct 6, 2015 at 1:35 AM, Ismael Juma  wrote:

> Hi all,
>
> You may have noticed that we receive one email for each comment in
> kafka-site pull requests. We don't have that enabled for the kafka (ie
> code) repository. Maybe that's OK as the number of emails would be much
> higher for the code repository, but I thought it would be good to get other
> people's opinions on it.
>
> So, for the code repository, would you prefer if:
>
> 1. We leave things as they are (emails to dev list are sent for
> opening/closing of PRs and other notifications are handled by one's own
> GitHub notification settings)
> 2. We change it to be like kafka-site and an email is sent to the dev list
> for each PR comment
> 3. Something else
>
> Ismael
>


[jira] [Commented] (KAFKA-2391) Blocking call such as position(), partitionsFor(), committed() and listTopics() should have a timeout

2015-10-06 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945349#comment-14945349
 ] 

Jiangjie Qin commented on KAFKA-2391:
-

[~ijuma] The problem with using the request timeout for the public API blocking 
calls is that the request timeout is meant for exception handling, so it is 
typically very long. Users who care about blocking time probably don't want to 
block for that long, so the request timeout is not an ideal candidate for a 
blocking timeout. That is why we have max.block.ms for the producer. For 
consumers, I think we want a similar configuration, so that any API call is 
guaranteed to return within max.block.ms.
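The max.block.ms idea can be sketched as follows. This is an illustration only, not the eventual consumer API: run the blocking work on another thread and cap the wait, so the caller regains control within the budget even if the underlying request hangs.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hedged sketch of a max.block.ms-style bound on a blocking call.
public class MaxBlockSketch {
    static <T> T callWithTimeout(Callable<T> call, long maxBlockMs) throws TimeoutException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // Future.get with a timeout enforces the blocking budget.
            return pool.submit(call).get(maxBlockMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();  // interrupt a hung task instead of leaking it
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast call returns its value; a call that outlives maxBlockMs
        // throws TimeoutException instead of blocking indefinitely.
        long position = callWithTimeout(() -> 42L, 500);
        System.out.println("position: " + position);
    }
}
```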

> Blocking call such as position(), partitionsFor(), committed() and 
> listTopics() should have a timeout
> -
>
> Key: KAFKA-2391
> URL: https://issues.apache.org/jira/browse/KAFKA-2391
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Onur Karaman
>
> The blocking calls should have a timeout from either configuration or 
> parameter. So far we have position(), partitionsFor(), committed() and 
> listTopics().





[jira] [Resolved] (KAFKA-2612) Increase the number of retained builds for kafka-trunk-git-pr-jdk7

2015-10-06 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira resolved KAFKA-2612.
-
Resolution: Fixed

> Increase the number of retained builds for kafka-trunk-git-pr-jdk7
> --
>
> Key: KAFKA-2612
> URL: https://issues.apache.org/jira/browse/KAFKA-2612
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>
> It seems like we only retain 12 and this means that we get a 404 for PRs that 
> are still active. We need a much higher number, maybe 50?





[jira] [Updated] (KAFKA-2476) Define logical types for Copycat data API

2015-10-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2476:
-
Status: Patch Available  (was: Open)

> Define logical types for Copycat data API
> -
>
> Key: KAFKA-2476
> URL: https://issues.apache.org/jira/browse/KAFKA-2476
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> We need some common types like datetime and decimal. This boils down to 
> defining the schemas for these types, along with documenting their semantics.





[GitHub] kafka-site pull request: Add Kafka 0.8.2.2 release to downloads pa...

2015-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/2




[jira] [Commented] (KAFKA-1451) Broker stuck due to leader election race

2015-10-06 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945391#comment-14945391
 ] 

Flavio Junqueira commented on KAFKA-1451:
-

Maybe related to KAFKA-1387?

> Broker stuck due to leader election race 
> -
>
> Key: KAFKA-1451
> URL: https://issues.apache.org/jira/browse/KAFKA-1451
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Maciek Makowski
>Assignee: Manikumar Reddy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1451.patch, KAFKA-1451_2014-07-28_20:27:32.patch, 
> KAFKA-1451_2014-07-29_10:13:23.patch
>
>
> h3. Symptoms
> The broker does not become available due to being stuck in an infinite loop 
> while electing leader. This can be recognised by the following line being 
> repeatedly written to server.log:
> {code}
> [2014-05-14 04:35:09,187] INFO I wrote this conflicted ephemeral node 
> [{"version":1,"brokerid":1,"timestamp":"1400060079108"}] at /controller a 
> while back in a different session, hence I will backoff for this node to be 
> deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> {code}
> h3. Steps to Reproduce
> In a single kafka 0.8.1.1 node, single zookeeper 3.4.6 (but will likely 
> behave the same with the ZK version included in Kafka distribution) node 
> setup:
> # start both zookeeper and kafka (in any order)
> # stop zookeeper
> # stop kafka
> # start kafka
> # start zookeeper
> h3. Likely Cause
> {{ZookeeperLeaderElector}} subscribes to data changes on startup, and then 
> triggers an election. If the deletion of the ephemeral {{/controller}} node 
> associated with the broker's previous zookeeper session happens after the 
> subscription to changes in the new session, the election will be invoked 
> twice, once from {{startup}} and once from {{handleDataDeleted}}:
> * {{startup}}: acquire {{controllerLock}}
> * {{startup}}: subscribe to data changes
> * zookeeper: delete {{/controller}} since the session that created it timed 
> out
> * {{handleDataDeleted}}: {{/controller}} was deleted
> * {{handleDataDeleted}}: wait on {{controllerLock}}
> * {{startup}}: elect -- writes {{/controller}}
> * {{startup}}: release {{controllerLock}}
> * {{handleDataDeleted}}: acquire {{controllerLock}}
> * {{handleDataDeleted}}: elect -- attempts to write {{/controller}} and then 
> gets into infinite loop as a result of conflict
> {{createEphemeralPathExpectConflictHandleZKBug}} assumes that the existing 
> znode was written from a different session, which is not true in this case; 
> it was written from the same session. That adds to the confusion.
> h3. Suggested Fix
> In {{ZookeeperLeaderElector.startup}} first run {{elect}} and then subscribe 
> to data changes.





[jira] [Commented] (KAFKA-2391) Blocking call such as position(), partitionsFor(), committed() and listTopics() should have a timeout

2015-10-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945385#comment-14945385
 ] 

Ismael Juma commented on KAFKA-2391:


[~becket_qin], that's a fair point. The suggestion to use the request timeout 
was not actually made by me; I was simply stating that `poll` already takes it 
into account.

> Blocking call such as position(), partitionsFor(), committed() and 
> listTopics() should have a timeout
> -
>
> Key: KAFKA-2391
> URL: https://issues.apache.org/jira/browse/KAFKA-2391
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Onur Karaman
>
> The blocking calls should have a timeout from either configuration or 
> parameter. So far we have position(), partitionsFor(), committed() and 
> listTopics().





[jira] [Commented] (KAFKA-2476) Define logical types for Copycat data API

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945434#comment-14945434
 ] 

ASF GitHub Bot commented on KAFKA-2476:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/281

KAFKA-2476: Add Decimal, Date, and Timestamp logical types.

To support Decimal, this also adds support for schema parameters, which is an
extra set of String key value pairs which provide extra information about the
schema. For Decimal, this is used to encode the scale parameter, which is part
of the schema instead of being passed with every value.
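The scale-as-schema-parameter idea can be sketched like this. The code is illustrative only, not the actual Copycat serialization: the scale is fixed once per schema (e.g. in a parameter map like {"scale": "2"}), and each value on the wire carries only the bytes of the unscaled integer.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Hedged sketch: encode/decode a Decimal whose scale lives in the schema,
// not in each serialized value. Names are hypothetical.
public class DecimalParamSketch {
    static byte[] encode(BigDecimal value, int schemaScale) {
        if (value.scale() != schemaScale)
            throw new IllegalArgumentException("value scale does not match schema");
        // Only the unscaled integer is serialized; the scale is schema metadata.
        return value.unscaledValue().toByteArray();
    }

    static BigDecimal decode(byte[] bytes, int schemaScale) {
        // The schema's scale parameter restores the original decimal point.
        return new BigDecimal(new BigInteger(bytes), schemaScale);
    }

    public static void main(String[] args) {
        int scale = 2;  // would be read from the schema's parameter map
        byte[] wire = encode(new BigDecimal("12.34"), scale);
        System.out.println(decode(wire, scale));
    }
}
```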

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2476-copycat-logical-types

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/281.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #281


commit a97ff2f3d9ccce878d34036a8ce4e6ca35cbe08c
Author: Ewen Cheslack-Postava 
Date:   2015-10-05T23:23:52Z

KAFKA-2476: Add Decimal, Date, and Timestamp logical types.

To support Decimal, this also adds support for schema parameters, which is an
extra set of String key value pairs which provide extra information about the
schema. For Decimal, this is used to encode the scale parameter, which is part
of the schema instead of being passed with every value.




> Define logical types for Copycat data API
> -
>
> Key: KAFKA-2476
> URL: https://issues.apache.org/jira/browse/KAFKA-2476
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> We need some common types like datetime and decimal. This boils down to 
> defining the schemas for these types, along with documenting their semantics.





[GitHub] kafka pull request: KAFKA-2476: Add Decimal, Date, and Timestamp l...

2015-10-06 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/281

KAFKA-2476: Add Decimal, Date, and Timestamp logical types.

To support Decimal, this also adds support for schema parameters, which is an
extra set of String key value pairs which provide extra information about the
schema. For Decimal, this is used to encode the scale parameter, which is part
of the schema instead of being passed with every value.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2476-copycat-logical-types

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/281.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #281


commit a97ff2f3d9ccce878d34036a8ce4e6ca35cbe08c
Author: Ewen Cheslack-Postava 
Date:   2015-10-05T23:23:52Z

KAFKA-2476: Add Decimal, Date, and Timestamp logical types.

To support Decimal, this also adds support for schema parameters, which is an
extra set of String key value pairs which provide extra information about the
schema. For Decimal, this is used to encode the scale parameter, which is part
of the schema instead of being passed with every value.



