[GitHub] kafka pull request: MINOR: Revert 0.10.0 branch to SNAPSHOT per ch...

2016-03-23 Thread gwenshap
Github user gwenshap closed the pull request at:

https://github.com/apache/kafka/pull/1126




[jira] [Updated] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3434:
-
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1123
[https://github.com/apache/kafka/pull/1123]

> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.
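The pattern being asked for here is an overloaded convenience constructor that delegates to the full one, supplying defaults for the newly added fields. A minimal sketch of that pattern (a generic example class for illustration, not the actual ConsumerRecord API):

    /** Illustrative sketch of the compatibility-constructor pattern the ticket asks for. */
    public class Record<K, V> {
        private final String topic;
        private final int partition;
        private final long offset;
        private final long timestamp;
        private final K key;
        private final V value;

        /** New, full constructor: every field must be supplied. */
        public Record(String topic, int partition, long offset, long timestamp, K key, V value) {
            this.topic = topic;
            this.partition = partition;
            this.offset = offset;
            this.timestamp = timestamp;
            this.key = key;
            this.value = value;
        }

        /** Old-style convenience constructor, delegating with a default for the new field. */
        public Record(String topic, int partition, long offset, K key, V value) {
            this(topic, partition, offset, -1L /* no timestamp */, key, value);
        }
    }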





Re: Messages corrupted in kafka

2016-03-23 Thread sunil kalva
I am using the Java client and Kafka 0.8.2. Since the events are corrupted on
the Kafka broker, I can't read and replay them again.

On Thu, Mar 24, 2016 at 9:42 AM, Becket Qin  wrote:

> Hi Sunil,
>
> Each message in Kafka has a CRC stored with it. When the consumer receives a
> message, it computes the CRC from the message bytes and compares it to the
> stored CRC. If the computed CRC and the stored CRC do not match, that
> indicates the message has been corrupted. I am not sure why the messages are
> corrupted in your case. Corrupted messages should be pretty rare because the
> broker validates the CRC before it stores the messages to disk.
>
> Is this problem reproducible? If so, can you find out which messages are
> corrupted? Also, are you using the Java clients or some other clients?
>
> Jiangjie (Becket) Qin
>
> On Wed, Mar 23, 2016 at 8:28 PM, sunil kalva 
> wrote:
>
> > Can someone help me out here?
> >
> > On Wed, Mar 23, 2016 at 7:36 PM, sunil kalva 
> > wrote:
> >
> > > Hi
> > > I am seeing a few messages getting corrupted in Kafka. It is not happening
> > > frequently, and the percentage is very small (less than 0.1%).
> > >
> > > Basically, I am publishing Thrift events as byte arrays to Kafka topics
> > > (without any encoding such as base64), and I also see more events than I
> > > publish (I confirm this by looking at the offset for that topic).
> > > For example, if I publish 100 events I see 110 as the offset for that topic
> > > (since it is in production I could not get the exact messages causing this
> > > problem, and we only realize the problem when we consume, because our
> > > Thrift deserialization fails).
> > >
> > > So my question is: is there any magic byte which actually determines the
> > > boundary of a message and could match a byte I am sending, or could a
> > > message get chopped by some network issue and stored as multiple messages
> > > on the server side?
> > >
> > > tx
> > > SunilKalva
> > >
> >
>


Re: Messages corrupted in kafka

2016-03-23 Thread Becket Qin
Hi Sunil,

Each message in Kafka has a CRC stored with it. When the consumer receives a
message, it computes the CRC from the message bytes and compares it to the
stored CRC. If the computed CRC and the stored CRC do not match, that indicates
the message has been corrupted. I am not sure why the messages are corrupted in
your case. Corrupted messages should be pretty rare because the broker
validates the CRC before it stores the messages to disk.
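For illustration, a minimal sketch of the check Becket describes (not the actual consumer code; it just recomputes a CRC32 over the payload bytes and compares it with the stored value):

    import java.util.zip.CRC32;

    public class CrcCheckSketch {
        /** Returns true if the stored CRC matches a CRC32 recomputed over the payload bytes. */
        static boolean crcMatches(byte[] payload, long storedCrc) {
            CRC32 crc = new CRC32();
            crc.update(payload, 0, payload.length);   // recompute from the raw message bytes
            return crc.getValue() == storedCrc;       // a mismatch means the message is corrupted
        }

        public static void main(String[] args) {
            byte[] payload = "hello".getBytes();
            CRC32 crc = new CRC32();
            crc.update(payload, 0, payload.length);
            System.out.println(crcMatches(payload, crc.getValue()));  // true: intact
            System.out.println(crcMatches(payload, 12345L));          // false: "corrupted"
        }
    }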

Is this problem reproducible? If so, can you find out which messages are
corrupted? Also, are you using the Java clients or some other clients?

Jiangjie (Becket) Qin

On Wed, Mar 23, 2016 at 8:28 PM, sunil kalva  wrote:

> Can someone help me out here?
>
> On Wed, Mar 23, 2016 at 7:36 PM, sunil kalva 
> wrote:
>
> > Hi
> > I am seeing a few messages getting corrupted in Kafka. It is not happening
> > frequently, and the percentage is very small (less than 0.1%).
> >
> > Basically, I am publishing Thrift events as byte arrays to Kafka topics
> > (without any encoding such as base64), and I also see more events than I
> > publish (I confirm this by looking at the offset for that topic).
> > For example, if I publish 100 events I see 110 as the offset for that topic
> > (since it is in production I could not get the exact messages causing this
> > problem, and we only realize the problem when we consume, because our
> > Thrift deserialization fails).
> >
> > So my question is: is there any magic byte which actually determines the
> > boundary of a message and could match a byte I am sending, or could a
> > message get chopped by some network issue and stored as multiple messages
> > on the server side?
> >
> > tx
> > SunilKalva
> >
>


Re: Messages corrupted in kafka

2016-03-23 Thread sunil kalva
Can someone help me out here?

On Wed, Mar 23, 2016 at 7:36 PM, sunil kalva  wrote:

> Hi
> I am seeing a few messages getting corrupted in Kafka. It is not happening
> frequently, and the percentage is very small (less than 0.1%).
>
> Basically, I am publishing Thrift events as byte arrays to Kafka topics
> (without any encoding such as base64), and I also see more events than I
> publish (I confirm this by looking at the offset for that topic).
> For example, if I publish 100 events I see 110 as the offset for that topic
> (since it is in production I could not get the exact messages causing this
> problem, and we only realize the problem when we consume, because our Thrift
> deserialization fails).
>
> So my question is: is there any magic byte which actually determines the
> boundary of a message and could match a byte I am sending, or could a message
> get chopped by some network issue and stored as multiple messages on the
> server side?
>
> tx
> SunilKalva
>
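On the boundary question above: in the 0.8.x wire format, message boundaries come from explicit length prefixes rather than from any delimiter byte in the payload, and the "magic" byte is a format-version marker inside each message, not a boundary marker. A rough sketch of that framing (simplified from the protocol documentation; not broker code, and the helper class name is made up):

    import java.nio.ByteBuffer;
    import java.util.zip.CRC32;

    /** Illustrative sketch of one 0.8.x message-set entry; field layout simplified from the protocol docs. */
    public class MessageFramingSketch {
        static ByteBuffer frame(long offset, byte[] key, byte[] value) {
            int messageSize = 4 + 1 + 1 + 4 + (key == null ? 0 : key.length) + 4 + value.length;
            ByteBuffer buf = ByteBuffer.allocate(8 + 4 + messageSize);
            buf.putLong(offset);          // 8-byte offset
            buf.putInt(messageSize);      // 4-byte size prefix: this is what delimits messages
            int crcPos = buf.position();
            buf.putInt(0);                // placeholder for the 4-byte CRC32 of everything after it
            buf.put((byte) 0);            // "magic" byte: a format version, not a delimiter
            buf.put((byte) 0);            // attributes (compression codec, etc.)
            buf.putInt(key == null ? -1 : key.length);
            if (key != null) buf.put(key);
            buf.putInt(value.length);
            buf.put(value);               // payload bytes are opaque; no escaping or delimiter needed
            CRC32 crc = new CRC32();
            crc.update(buf.array(), crcPos + 4, buf.position() - crcPos - 4);
            buf.putInt(crcPos, (int) crc.getValue());
            return (ByteBuffer) buf.flip();
        }

        public static void main(String[] args) {
            ByteBuffer entry = frame(0L, null, "payload-bytes".getBytes());
            System.out.println("framed " + entry.remaining() + " bytes");
        }
    }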


[jira] [Updated] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-03-23 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3418:
---
Status: Patch Available  (was: In Progress)

> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do to 
> limit the potential for the consumer to fall out of the group (such as 
> tweaking max.poll.records). We should pull this into a separate section (e.g. 
> Jay suggests "Detecting Consumer Failures") and give it a more complete 
> treatment.
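For context, a minimal sketch of the kind of poll loop such a section would describe (a sketch only: the broker address, group id, and topic name are placeholders, and max.poll.records simply bounds how much work each poll() call picks up so the gap between poll() calls stays within the session timeout):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollLoopSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "example-group");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // Cap the records returned per poll() so the processing time between
            // polls stays well under the session timeout and the consumer is not
            // kicked out of the group.
            props.put("max.poll.records", "100");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic"));
                while (true) {
                    // poll() must be called regularly: in this consumer version it is
                    // what keeps the member alive in the group.
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        process(record);   // keep per-batch work bounded
                    }
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
    }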





[jira] [Commented] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209613#comment-15209613
 ] 

ASF GitHub Bot commented on KAFKA-3418:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1129

KAFKA-3418: add javadoc section describing consumer failure detection



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3418

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1129


commit c632ccda0192bc3821a00dbd3108b5a96240acd7
Author: Jason Gustafson 
Date:   2016-03-24T02:13:03Z

KAFKA-3418: add javadoc section describing consumer failure detection




> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do to 
> limit the potential for the consumer to fall out of the group (such as 
> tweaking max.poll.records). We should pull this into a separate section (e.g. 
> Jay suggests "Detecting Consumer Failures") and give it a more complete 
> treatment.





[GitHub] kafka pull request: KAFKA-3418: add javadoc section describing con...

2016-03-23 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1129

KAFKA-3418: add javadoc section describing consumer failure detection



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3418

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1129


commit c632ccda0192bc3821a00dbd3108b5a96240acd7
Author: Jason Gustafson 
Date:   2016-03-24T02:13:03Z

KAFKA-3418: add javadoc section describing consumer failure detection






Re: [VOTE] KIP-35: Retrieving protocol version

2016-03-23 Thread Gwen Shapira
We (Jay and I) had some extra information we wanted to see in the KIP before
we are comfortable voting:

* Where the Java client fits in. Hopefully we can use this KIP to standardize
behavior and guarantees between Java and non-Java clients, so that when we
reason about the Java clients, which most Kafka developers are familiar with,
we will make the right decisions for all clients.
* When do we bump the protocol? I think 90% of the issue is not that the
version got bumped but rather that we changed behavior without bumping
versions. For the new VersionRequest to be useful, we all need to know when to
bump to new versions...
* How do we test / validate? I think our recent experience shows that our
protocol tests and compatibility tests are still inadequate. Having
VersionRequest is useless if we can't validate that Kafka actually implements
the protocol it says it does (and we caught breaks like this twice in the last
two weeks).
* Error handling of protocol mismatches.

Ashish kindly agreed to think about this and improve the KIP.
We'll resume the vote as soon as he's back :)

Gwen


On Wed, Mar 23, 2016 at 5:55 PM, Dana Powers  wrote:

> speaking of pending KIPs, what's the status on this one?
>
>
> On Fri, Mar 18, 2016 at 9:47 PM, Ashish Singh  wrote:
>
> > Hey Jay,
> >
> > Answers inline.
> >
> > On Fri, Mar 18, 2016 at 10:45 AM, Jay Kreps  wrote:
> >
> > Hey Ashish,
> > >
> > > Couple quick things:
> > >
> > > 1. You list as a rejected alternative "making the documentation the
> > > source of truth for the protocol", but I think what you actually
> > > describe in that section is global versioning, which of those two
> > > things are we voting to reject? I think this is a philosophical point
> > > but an important one...
> > >
> > One of the major differences between Option 3 and the other options discussed
> > in the KIP is that Option 3 is documentation oriented, and that is what I
> > wanted to capture in the title. I am happy to change it to global
> > versioning.
> >
> >
> > > 2. Can you describe the changes necessary and classes we'd have to
> > > update in the java clients to make use of this feature? What would
> > > that look like? One concern I have is just the complexity necessary to
> > > do the per-connection protocol version check and really handle all the
> > > cases. I assume you've thought through what that looks like, can you
> > > sketch that out for people?
> > >
> > I would imagine any client, even the Java client, would follow the steps
> > mentioned here
> > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version#KIP-35-Retrievingprotocolversion-Aclientdeveloperwantstoaddsupportforanewfeature.1
> > >.
> > Below are my thoughts on how the Java client could maintain the API versions
> > supported by the various brokers in a cluster.
> >
> >1. ClusterConnectionStates can provide info on whether api versions
> have
> >been retrieved for a connection or not.
> >2. NetworkClient.handleConnections can send ApiVersionQueryRequest to
> >newly connected nodes.
> >3. NetworkClient can be enhanced to handle ApiVersionQueryResponse and
> >set ClusterConnectionStates to indicate api versions have been
> retrieved
> >for the node.
> >4. NetworkClient maintains mapping Node -> [(api_key, min_ver,
> >max_ver)], brokerApiVersions, cached.
> >5. NetworkClient.processDisconnection can remove entry for a node from
> >brokerApiVersions cache.
> >6. NetworkClient.canSendRequest can have an added condition on node to
> >have api versions available.
> >
> > With the above changes, at any given point in time NetworkClient will be
> > aware of the API versions supported by each of the connected nodes. I am not
> > sure if the above changes are the best way to do it; people are welcome to
> > pitch in. Does it help?
> >
> >
> > > -Jay
> > >
> > > On Mon, Mar 14, 2016 at 3:54 PM, Ashish Singh 
> > wrote:
> > > > Hey Guys,
> > > >
> > > > I would like to start voting process for *KIP-35: Retrieving protocol
> > > > version*. The KIP is available here
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version
> > > >.
> > > > Here
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version#KIP-35-Retrievingprotocolversion-SummaryofthechangesproposedaspartofthisKIP
> > > >
> > > > is a brief summary of the KIP.
> > > >
> > > > The vote will run for 72 hours.
> > > >
> > > > --
> > > >
> > > > Regards,
> > > > Ashish
> > >
> > ​
> > --
> >
> > Regards,
> > Ashish
> >
>
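A rough illustration of the per-connection version cache sketched in the numbered list above (names such as brokerApiVersions and ApiVersionRange follow the KIP discussion rather than any shipped code; this is a hedged sketch, not an actual NetworkClient change):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /** Hedged sketch of the per-node API-version cache described in the KIP-35 thread. */
    public class BrokerApiVersionsSketch {

        /** One supported version range for one API key, as reported by a broker. */
        static class ApiVersionRange {
            final short apiKey;
            final short minVersion;
            final short maxVersion;

            ApiVersionRange(short apiKey, short minVersion, short maxVersion) {
                this.apiKey = apiKey;
                this.minVersion = minVersion;
                this.maxVersion = maxVersion;
            }
        }

        // node id -> (api key -> supported version range), populated after the
        // version-query exchange on connect and dropped again on disconnect.
        private final Map<Integer, Map<Short, ApiVersionRange>> brokerApiVersions = new ConcurrentHashMap<>();

        /** Called once the version-query response for a newly connected node arrives (steps 2-4 above). */
        void onVersionsReceived(int nodeId, Map<Short, ApiVersionRange> versions) {
            brokerApiVersions.put(nodeId, versions);
        }

        /** Step 5 above: evict the cached entry when the connection goes away. */
        void onDisconnect(int nodeId) {
            brokerApiVersions.remove(nodeId);
        }

        /** Step 6 above: only send if the broker's advertised range covers the version we want. */
        boolean canSendRequest(int nodeId, short apiKey, short version) {
            Map<Short, ApiVersionRange> versions = brokerApiVersions.get(nodeId);
            if (versions == null) {
                return false;   // API versions not yet retrieved for this node
            }
            ApiVersionRange range = versions.get(apiKey);
            return range != null && version >= range.minVersion && version <= range.maxVersion;
        }
    }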


[jira] [Commented] (KAFKA-2309) ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209534#comment-15209534
 ] 

ASF GitHub Bot commented on KAFKA-2309:
---

GitHub user auradkar reopened a pull request:

https://github.com/apache/kafka/pull/185

KAFKA-2309; ISR shrink rate not updated on LeaderAndIsr request with shrunk 
ISR

Currently, a LeaderAndIsrRequest does not mark the isrShrinkRate if the 
received ISR is smaller than the existing ISR. This can happen if one of the 
replicas is shut down.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185


commit 090a78284478ba57c655fe7a931e8318641bceae
Author: Aditya Auradkar 
Date:   2015-09-02T01:16:53Z

Fixing KAFKA-2309

commit 7d60319a99feadc0c13fd7639d39b6af786515e5
Author: Aditya Auradkar 
Date:   2015-10-07T01:54:19Z

Addressing Joels comment

commit bd8dc02e28b94422735114275737f3efcf667a8b
Author: Aditya Auradkar 
Date:   2015-10-07T02:00:21Z

addressing comments




> ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR
> ---
>
> Key: KAFKA-2309
> URL: https://issues.apache.org/jira/browse/KAFKA-2309
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
>Priority: Minor
>
> If a broker receives a LeaderAndIsr request with a shrunk ISR (say, when a 
> follower shuts down) it needs to mark the isr shrink rate meter when it 
> updates its ISR.
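A minimal sketch of the behavior the ticket asks for (illustrative only; the class and method names here are made up rather than taken from the actual ReplicaManager code, though the broker does use Yammer metrics meters of this kind):

    import java.util.Set;
    import java.util.concurrent.TimeUnit;
    import com.yammer.metrics.Metrics;
    import com.yammer.metrics.core.Meter;

    /** Illustrative sketch: mark the ISR-shrink meter when a LeaderAndIsr request shrinks the ISR. */
    public class IsrShrinkSketch {
        private final Meter isrShrinkRate =
                Metrics.newMeter(IsrShrinkSketch.class, "IsrShrinksPerSec", "shrinks", TimeUnit.SECONDS);

        void onLeaderAndIsrRequest(Set<Integer> currentIsr, Set<Integer> newIsr) {
            if (newIsr.size() < currentIsr.size()) {
                // The received ISR is smaller than the one we currently hold, e.g.
                // because a follower was shut down: record it on the same
                // shrink-rate meter the leader-side ISR maintenance path marks.
                isrShrinkRate.mark();
            }
        }
    }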





[GitHub] kafka pull request: KAFKA-2309; ISR shrink rate not updated on Lea...

2016-03-23 Thread auradkar
GitHub user auradkar reopened a pull request:

https://github.com/apache/kafka/pull/185

KAFKA-2309; ISR shrink rate not updated on LeaderAndIsr request with shrunk 
ISR

Currently, a LeaderAndIsrRequest does not mark the isrShrinkRate if the 
received ISR is smaller than the existing ISR. This can happen if one of the 
replicas is shut down.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185


commit 090a78284478ba57c655fe7a931e8318641bceae
Author: Aditya Auradkar 
Date:   2015-09-02T01:16:53Z

Fixing KAFKA-2309

commit 7d60319a99feadc0c13fd7639d39b6af786515e5
Author: Aditya Auradkar 
Date:   2015-10-07T01:54:19Z

Addressing Joels comment

commit bd8dc02e28b94422735114275737f3efcf667a8b
Author: Aditya Auradkar 
Date:   2015-10-07T02:00:21Z

addressing comments






Re: [VOTE] KIP-35: Retrieving protocol version

2016-03-23 Thread Dana Powers
speaking of pending KIPs, what's the status on this one?


On Fri, Mar 18, 2016 at 9:47 PM, Ashish Singh  wrote:

> Hey Jay,
>
> Answers inline.
>
> On Fri, Mar 18, 2016 at 10:45 AM, Jay Kreps  wrote:
>
> Hey Ashish,
> >
> > Couple quick things:
> >
> > 1. You list as a rejected alternative "making the documentation the
> > source of truth for the protocol", but I think what you actually
> > describe in that section is global versioning, which of those two
> > things are we voting to reject? I think this is a philosophical point
> > but an important one...
> >
> One of the major differences between Option 3 and the other options discussed
> in the KIP is that Option 3 is documentation oriented, and that is what I
> wanted to capture in the title. I am happy to change it to global
> versioning.
>
>
> > 2. Can you describe the changes necessary and classes we'd have to
> > update in the java clients to make use of this feature? What would
> > that look like? One concern I have is just the complexity necessary to
> > do the per-connection protocol version check and really handle all the
> > cases. I assume you've thought through what that looks like, can you
> > sketch that out for people?
> >
> I would imagine any client, even the Java client, would follow the steps
> mentioned here
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version#KIP-35-Retrievingprotocolversion-Aclientdeveloperwantstoaddsupportforanewfeature.1
> >.
> Below are my thoughts on how the Java client could maintain the API versions
> supported by the various brokers in a cluster.
>
>1. ClusterConnectionStates can provide info on whether api versions have
>been retrieved for a connection or not.
>2. NetworkClient.handleConnections can send ApiVersionQueryRequest to
>newly connected nodes.
>3. NetworkClient can be enhanced to handle ApiVersionQueryResponse and
>set ClusterConnectionStates to indicate api versions have been retrieved
>for the node.
>4. NetworkClient maintains mapping Node -> [(api_key, min_ver,
>max_ver)], brokerApiVersions, cached.
>5. NetworkClient.processDisconnection can remove entry for a node from
>brokerApiVersions cache.
>6. NetworkClient.canSendRequest can have an added condition on node to
>have api versions available.
>
> With the above changes, at any given point in time NetworkClient will be
> aware of the API versions supported by each of the connected nodes. I am not
> sure if the above changes are the best way to do it; people are welcome to
> pitch in. Does it help?
>
>
> > -Jay
> >
> > On Mon, Mar 14, 2016 at 3:54 PM, Ashish Singh 
> wrote:
> > > Hey Guys,
> > >
> > > I would like to start voting process for *KIP-35: Retrieving protocol
> > > version*. The KIP is available here
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version
> > >.
> > > Here
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version#KIP-35-Retrievingprotocolversion-SummaryofthechangesproposedaspartofthisKIP
> > >
> > > is a brief summary of the KIP.
> > >
> > > The vote will run for 72 hours.
> > >
> > > --
> > >
> > > Regards,
> > > Ashish
> >
> ​
> --
>
> Regards,
> Ashish
>


[jira] [Updated] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3434:
---
Reviewer: Ewen Cheslack-Postava  (was: Jun Rao)

> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.





[jira] [Updated] (KAFKA-2370) Add pause/unpause connector support

2016-03-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2370:
---
Priority: Critical  (was: Blocker)

> Add pause/unpause connector support
> ---
>
> Key: KAFKA-2370
> URL: https://issues.apache.org/jira/browse/KAFKA-2370
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> It will sometimes be useful to pause/unpause connectors. For example, if you 
> know planned maintenance will occur on the source/destination system, it 
> would make sense to pause and then resume (but not delete and then restore) 
> a connector.
> This likely requires support in all Coordinator implementations 
> (standalone/distributed) to trigger the events.





[jira] [Updated] (KAFKA-2370) Add pause/unpause connector support

2016-03-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2370:
---
Fix Version/s: (was: 0.10.0.0)
   0.10.1.0

> Add pause/unpause connector support
> ---
>
> Key: KAFKA-2370
> URL: https://issues.apache.org/jira/browse/KAFKA-2370
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> It will sometimes be useful to pause/unpause connectors. For example, if you 
> know planned maintenance will occur on the source/destination system, it 
> would make sense to pause and then resume (but not delete and then restore) 
> a connector.
> This likely requires support in all Coordinator implementations 
> (standalone/distributed) to trigger the events.





Build failed in Jenkins: kafka-trunk-jdk8 #477

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: remove streams-smoke-test.sh

--
[...truncated 3797 lines...]

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionDefault PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueExceptions PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionInheritance PASSED

org.apache.kafka.common.protocol.ErrorsTest > testNoneException PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueErrorCodes PASSED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionsAreNotGeneric PASSED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdLow PASSED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdHigh PASSED

org.apache.kafka.common.protocol.ProtoUtilsTest > schemaVersionOutOfRange PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadStringSizeTooLarge PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testNullableDefault PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeStringSize PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadArraySizeTooLarge PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeBytesSize PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadBytesSizeTooLarge PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testSimple 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeArraySize PASSED

org.apache.kafka.common.requests.RequestResponseTest > testSerialization PASSED

org.apache.kafka.common.requests.RequestResponseTest > fetchResponseVersionTest 
PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
produceResponseVersionTest PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
testControlledShutdownResponse PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
testRequestHeaderWithNullClientId PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.metrics.MetricsTest > testSimpleStats PASSED

org.apache.kafka.common.metrics.MetricsTest > testOldDataHasNoEffect PASSED

org.apache.kafka.common.metrics.MetricsTest > testQuotasEquality PASSED

org.apache.kafka.common.metrics.MetricsTest > testRemoveInactiveMetrics PASSED

org.apache.kafka.common.metrics.MetricsTest > testMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testRateWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testTimeWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testEventWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testRemoveMetric PASSED

org.apache.kafka.common.metrics.MetricsTest > testBadSensorHierarchy PASSED

org.apache.kafka.common.metrics.MetricsTest > testRemoveSensor PASSED

org.apache.kafka.common.metrics.MetricsTest > testPercentiles PASSED

org.apache.kafka.common.metrics.MetricsTest > testDuplicateMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testQuotas PASSED

org.apache.kafka.common.metrics.MetricsTest > testHierarchicalSensors PASSED

org.apache.kafka.common.metrics.JmxReporterTest > testJmxRegistration PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testHistogram PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testConstantBinScheme 
PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testLinearBinScheme PASSED

org.apache.kafka.common.utils.CrcTest > testUpdateInt PASSED

org.apache.kafka.common.utils.CrcTest > testUpdate PASSED

org.apache.kafka.common.utils.UtilsTest > testAbs PASSED


[jira] [Commented] (KAFKA-2309) ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209490#comment-15209490
 ] 

ASF GitHub Bot commented on KAFKA-2309:
---

Github user auradkar closed the pull request at:

https://github.com/apache/kafka/pull/185


> ISR shrink rate not updated on LeaderAndIsr request with shrunk ISR
> ---
>
> Key: KAFKA-2309
> URL: https://issues.apache.org/jira/browse/KAFKA-2309
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
>Priority: Minor
>
> If a broker receives a LeaderAndIsr request with a shrunk ISR (say, when a 
> follower shuts down) it needs to mark the isr shrink rate meter when it 
> updates its ISR.





[GitHub] kafka pull request: KAFKA-2309; ISR shrink rate not updated on Lea...

2016-03-23 Thread auradkar
Github user auradkar closed the pull request at:

https://github.com/apache/kafka/pull/185




[jira] [Commented] (KAFKA-3427) broker can return incorrect version of fetch response when the broker hits an unknown exception

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209485#comment-15209485
 ] 

ASF GitHub Bot commented on KAFKA-3427:
---

GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/1128

KAFKA-3427 - Broker should return correct version of FetchResponse on 
exception

Merging the fix from: https://issues.apache.org/jira/browse/KAFKA-3427
The original version of the code returned a response using V0 of the 
response protocol. This caused clients to break because they expected the 
throttle_time_ms field to be present.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka k-34

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1128.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1128


commit e1e3cf7ce20e1f0b58e602b48d8b974a64f60cd8
Author: Aditya Auradkar 
Date:   2016-03-24T00:15:56Z

KAFKA-3427: Broker should return correct version of fetch response on 
exception
Merging fix from trunk.




> broker can return incorrect version of fetch response when the broker hits an 
> unknown exception
> ---
>
> Key: KAFKA-3427
> URL: https://issues.apache.org/jira/browse/KAFKA-3427
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> In FetchResponse.handleError(), we generate FetchResponse like the following, 
> which always defaults to version 0 of the response. 
> FetchResponse(correlationId, fetchResponsePartitionData)





[GitHub] kafka pull request: KAFKA-3427 - Broker should return correct vers...

2016-03-23 Thread auradkar
GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/1128

KAFKA-3427 - Broker should return correct version of FetchResponse on 
exception

Merging the fix from: https://issues.apache.org/jira/browse/KAFKA-3427
The original version of the code returned a response using V0 of the 
response protocol. This caused clients to break because they expected the 
throttle_time_ms field to be present.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka k-34

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1128.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1128


commit e1e3cf7ce20e1f0b58e602b48d8b974a64f60cd8
Author: Aditya Auradkar 
Date:   2016-03-24T00:15:56Z

KAFKA-3427: Broker should return correct version of fetch response on 
exception
Merging fix from trunk.






Jenkins build is back to normal : kafka-0.10.0-jdk7 #12

2016-03-23 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-3453) Transient test failures due to MiniKDC port allocation strategy

2016-03-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3453:
---
Fix Version/s: 0.10.0.1

> Transient test failures due to MiniKDC port allocation strategy
> ---
>
> Key: KAFKA-3453
> URL: https://issues.apache.org/jira/browse/KAFKA-3453
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ismael Juma
> Fix For: 0.10.0.1
>
>
> A number of tests, especially our consumer tests, fail transiently because 
> MiniKDC allocates ports by creating a socket, getting its port, then closing 
> it. As previously addressed in our own code, this causes problems because 
> that port can be reallocated before the process has a chance to bind a new 
> socket -- whether due to another test running in parallel or another process 
> simply binding the port first. This results in errors like this in the tests:
> {quote}
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> This is an ongoing issue that Confluent sees in its Jenkins builds, which is 
> the reason for this ticket. The real issue is actually in MiniKDC (we pass in 
> "0" for the port, but then it uses this other port allocation strategy), but 
> we either need to a) figure out a workaround or b) get a fix in upstream and 
> then update to a newer MiniKDC version.
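For illustration, the port-allocation pattern the ticket describes looks roughly like this (a sketch of the anti-pattern and the safer alternative, not MiniKDC's actual code):

    import java.net.ServerSocket;

    public class PortAllocationSketch {
        public static void main(String[] args) throws Exception {
            // Anti-pattern: "reserve" a free port by opening and immediately closing
            // a socket, then hand the number to another component to bind later.
            int port;
            try (ServerSocket probe = new ServerSocket(0)) {
                port = probe.getLocalPort();
            }
            // Between here and the later bind, any other test or process can grab
            // the port, producing "java.net.BindException: Address already in use".
            ServerSocket server = new ServerSocket(port);

            // Safer: bind to port 0 directly and ask the bound socket for its port,
            // so the port is never released in between.
            try (ServerSocket bound = new ServerSocket(0)) {
                int actualPort = bound.getLocalPort();
                System.out.println("listening on " + actualPort);
            }
            server.close();
        }
    }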





[jira] [Commented] (KAFKA-3453) Transient test failures due to MiniKDC port allocation strategy

2016-03-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209443#comment-15209443
 ] 

Ismael Juma commented on KAFKA-3453:


The plan is to update to MiniKDC 2.8.0 when it is released. I've assigned the issue to 
myself so that I don't forget.

> Transient test failures due to MiniKDC port allocation strategy
> ---
>
> Key: KAFKA-3453
> URL: https://issues.apache.org/jira/browse/KAFKA-3453
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ismael Juma
>
> A number of tests, especially our consumer tests, fail transiently because 
> MiniKDC allocates ports by creating a socket, getting its port, then closing 
> it. As previously addressed in our own code, this causes problems because 
> that port can be reallocated before the process has a chance to bind a new 
> socket -- whether due to another test running in parallel or another process 
> simply binding the port first. This results in errors like this in the tests:
> {quote}
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> This is an ongoing issue that Confluent sees in its Jenkins builds, which is 
> the reason for this ticket. The real issue is actually in MiniKDC (we pass in 
> "0" for the port, but then it uses this other port allocation strategy), but 
> we either need to a) figure out a workaround or b) get a fix in upstream and 
> then update to a newer MiniKDC version.





[jira] [Assigned] (KAFKA-3453) Transient test failures due to MiniKDC port allocation strategy

2016-03-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3453:
--

Assignee: Ismael Juma

> Transient test failures due to MiniKDC port allocation strategy
> ---
>
> Key: KAFKA-3453
> URL: https://issues.apache.org/jira/browse/KAFKA-3453
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ismael Juma
>
> A number of tests, especially our consumer tests, fail transiently because 
> MiniKDC allocates ports by creating a socket, getting its port, then closing 
> it. As previously addressed in our own code, this causes problems because 
> that port can be reallocated before the process has a chance to bind a new 
> socket -- whether due to another test running in parallel or another process 
> simply binding the port first. This results in errors like this in the tests:
> {quote}
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> This is an ongoing issue that Confluent sees in its Jenkins builds, which is 
> the reason for this ticket. The real issue is actually in MiniKDC (we pass in 
> "0" for the port, but then it uses this other port allocation strategy), but 
> we either need to a) figure out a workaround or b) get a fix in upstream and 
> then update to a newer MiniKDC version.





Build failed in Jenkins: kafka-trunk-jdk7 #1146

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: remove streams-smoke-test.sh

--
[...truncated 3089 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED


[jira] [Commented] (KAFKA-3453) Transient test failures due to MiniKDC port allocation strategy

2016-03-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209404#comment-15209404
 ] 

Ismael Juma commented on KAFKA-3453:


MiniKDC has recently received a fix for something similar to what is described 
here:

https://github.com/apache/hadoop/commit/8fb70a031b323634ddc51ff6aff4f376baef68c8

> Transient test failures due to MiniKDC port allocation strategy
> ---
>
> Key: KAFKA-3453
> URL: https://issues.apache.org/jira/browse/KAFKA-3453
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>
> A number of tests, especially our consumer tests, fail transiently because 
> MiniKDC allocates ports by creating a socket, getting its port, then closing 
> it. As previously addressed in our own code, this causes problems because 
> that port can be reallocated before the process has a chance to bind a new 
> socket -- whether due to another test running in parallel or another process 
> simply binding the port first. This results in errors like this in the tests:
> {quote}
> java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> This is an ongoing issue that Confluent sees in its Jenkins builds, which is 
> the reason for this ticket. The real issue is actually in MiniKDC (we pass in 
> "0" for the port, but then it uses this other port allocation strategy), but 
> we either need to a) figure out a workaround or b) get a fix in upstream and 
> then update to a newer MiniKDC version.





[jira] [Commented] (KAFKA-3454) Add Kafka Streams section in documentation

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209383#comment-15209383
 ] 

ASF GitHub Bot commented on KAFKA-3454:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1127

[WIP] KAFKA-3454: add Kafka Streams web docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KStreamsDocs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1127.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1127


commit 479d5b1a1893b2c96ca6ad313d637c245a86eac1
Author: Guozhang Wang 
Date:   2016-03-23T21:56:31Z

add streams section

commit 29232d233b6c4c809a810e61c9f237ce9982
Author: Guozhang Wang 
Date:   2016-03-23T23:06:36Z

add quickstart for Kafka Streams




> Add Kafka Streams section in documentation
> --
>
> Key: KAFKA-3454
> URL: https://issues.apache.org/jira/browse/KAFKA-3454
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> We need to add the Kafka Streams section in documents since the 0.10.0.0 
> release.





[GitHub] kafka pull request: [WIP] KAFKA-3454: add Kafka Streams web docs

2016-03-23 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1127

[WIP] KAFKA-3454: add Kafka Streams web docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KStreamsDocs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1127.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1127


commit 479d5b1a1893b2c96ca6ad313d637c245a86eac1
Author: Guozhang Wang 
Date:   2016-03-23T21:56:31Z

add streams section

commit 29232d233b6c4c809a810e61c9f237ce9982
Author: Guozhang Wang 
Date:   2016-03-23T23:06:36Z

add quickstart for Kafka Streams






Re: [VOTE] KIP-43: Kafka SASL enhancements

2016-03-23 Thread Gwen Shapira
Sorry! Got distracted by the impending release!

+1 on the current revision of the KIP.

On Wed, Mar 23, 2016 at 3:33 PM, Harsha  wrote:

> Any update on this? Gwen, since the KIP has been adjusted to address the
> pluggable classes, we should make a move on this.
>
> Rajini,
> Can you restart the voting thread?
>
> Thanks,
> Harsha
>
> On Wed, Mar 16, 2016, at 06:42 AM, Rajini Sivaram wrote:
> > As discussed in the KIP meeting yesterday, the scope of KIP-43 has been
> > reduced so that it can be integrated into 0.10.0.0. The updated KIP is
> > here:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
> > .
> >
> > Can we continue the vote on the updated KIP?
> >
> > Thank you,
> >
> > Rajini
> >
> > On Thu, Mar 10, 2016 at 2:09 AM, Gwen Shapira  wrote:
> >
> > > Harsha,
> > >
> > > Since you are clearly in favor of the KIP, do you mind jumping into
> > > the discussion thread and helping me understand the decision behind the
> > > configuration parameters only allowing a single Login and
> > > CallbackHandler class? This seems too limiting to me, and while Rajini
> > > is trying hard to convince me otherwise, I remain doubtful. Perhaps
> > > (since we have similar experience with Hadoop), you can help me see
> > > what I am missing.
> > >
> > > Gwen
> > >
> > > On Wed, Mar 9, 2016 at 12:02 PM, Harsha  wrote:
> > > > +1 (binding)
> > > >
> > > > On Tue, Mar 8, 2016, at 02:37 AM, tao xiao wrote:
> > > >> +1 (non-binding)
> > > >>
> > > >> On Tue, 8 Mar 2016 at 05:33 Andrew Schofield <
> > > >> andrew_schofield_j...@outlook.com> wrote:
> > > >>
> > > >> > +1 (non-binding)
> > > >> >
> > > >> > 
> > > >> > > From: ism...@juma.me.uk
> > > >> > > Date: Mon, 7 Mar 2016 19:52:11 +
> > > >> > > Subject: Re: [VOTE] KIP-43: Kafka SASL enhancements
> > > >> > > To: dev@kafka.apache.org
> > > >> > >
> > > >> > > +1 (non-binding)
> > > >> > >
> > > >> > > On Thu, Mar 3, 2016 at 10:37 AM, Rajini Sivaram <
> > > >> > > rajinisiva...@googlemail.com> wrote:
> > > >> > >
> > > >> > >> I would like to start the voting process for *KIP-43: Kafka
> SASL
> > > >> > >> enhancements*. This KIP extends the SASL implementation in
> Kafka to
> > > >> > support
> > > >> > >> new SASL mechanisms to enable Kafka to be integrated with
> different
> > > >> > >> authentication servers.
> > > >> > >>
> > > >> > >> The KIP is available here for reference:
> > > >> > >>
> > > >> > >>
> > > >> >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43:+Kafka+SASL+enhancements
> > > >> > >>
> > > >> > >> And here's is a link to the discussion on the mailing list:
> > > >> > >>
> > > >> > >>
> > > >> >
> > >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201601.mbox/%3CCAOJcB39b9Vy7%3DZEM3tLw2zarCS4A_s-%2BU%2BC%3DuEcWs0712UaYrQ%40mail.gmail.com%3E
> > > >> > >>
> > > >> > >>
> > > >> > >> Thank you...
> > > >> > >>
> > > >> > >> Regards,
> > > >> > >>
> > > >> > >> Rajini
> > > >> > >>
> > > >> >
> > >
> >
> >
> >
> > --
> > Regards,
> >
> > Rajini
>


Re: Fallout from upgrading to kafka 0.9.0.0 from 0.8.2.1

2016-03-23 Thread Qi Xu
More information about the issue:
When the issue happens, the controller is always on the 0.9 version Kafka
broker.
In server.log of other brokers, we can see this kind of error:
[2016-03-23 22:36:02,814] ERROR [ReplicaFetcherThread-0-5], Error for
partition [topic,208] to broker
5:org.apache.kafka.common.errors.NotLeaderForPartitionException: This
server is not the leader for that topic-partition.
(kafka.server.ReplicaFetcherThread)

And after restart that controller, everything works again.


On Tue, Mar 22, 2016 at 6:14 PM, Qi Xu  wrote:

> Hi folks, Rajiv, Jun,
> I'd like to bring up this thread again from Rajiv Kurian 3 months ago.
> Basically we did the same thing as Rajiv did. I upgraded two machines (out
> of 10) from 0.8.2.1 to 0.9, so after the upgrade there were 2 machines
> on 0.9 and 8 machines on 0.8.2.1. Initially it all worked fine, but after
> about 2 hours all the old producers and consumers were broken because no
> leader could be found for any partition of any topic. The producer just
> complains "unknown error for topic xxx" when it tries to refresh the
> metadata, and on the server side there's an error complaining that there is
> no leader for a partition.
> I'm wondering whether there is any known issue with 0.9 and 0.8.2 brokers
> co-existing in the same cluster? Thanks a lot.
>
>
> Below is the original thread:
>
> We had to revert to 0.8.3 because three of our topics seemed to have gotten
> corrupted during the upgrade. As soon as we did the upgrade, producers to
> the three topics I mentioned stopped being able to do writes. The clients
> complained (occasionally) about leader not found exceptions. We restarted
> our clients and brokers but that didn't seem to help. Actually even after
> reverting to 0.8.3 these three topics were broken. To fix it we had to stop
> all clients, delete the topics, create them again and then restart the
> clients.
>
> I realize this is not a lot of info. I couldn't wait to get more debug info
> because the cluster was actually being used. Has anyone run into something
> like this? Are there any known issues with old consumers/producers? The
> topics that got busted had clients writing to them using the old Java
> wrapper over the Scala producer.
>
> Here are the steps I took to upgrade.
>
> For each broker:
>
> 1. Stop the broker.
> 2. Restart with the *0.9* broker running with
> inter.broker.protocol.version=*0.8.2*.X
> 3. Wait for under replicated partitions to go down to 0.
> 4. Go to step 1.
> Once all the brokers were running the *0.9* code with
> inter.broker.protocol.version=*0.8.2*.X we restarted them one by one with
> inter.broker.protocol.version=0.9.0.0
>
> When reverting I did the following.
>
> For each broker.
>
> 1. Stop the broker.
> 2. Restart with the *0.9* broker running with
> inter.broker.protocol.version=*0.8.2*.X
> 3. Wait for under replicated partitions to go down to 0.
> 4. Go to step 1.
>
> Once all the brokers were running *0.9* code with
> inter.broker.protocol.version=*0.8.2*.X  I restarted them one by one with
> the
> 0.8.2.3 broker code. This, however, like I mentioned, did not fix the three
> broken topics.
>
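
For readers following along, a minimal sketch of the setting that drives those
steps (expressed as Java Properties purely for illustration; the broker actually
reads these keys from server.properties, and the exact values below are
assumptions):

    import java.util.Properties;

    // Phase 1: brokers run the 0.9 binaries but still speak the 0.8.2 wire protocol.
    Properties phase1 = new Properties();
    phase1.setProperty("inter.broker.protocol.version", "0.8.2");

    // Phase 2: once every broker runs 0.9 and under-replicated partitions are back
    // at 0, restart the brokers one by one with the protocol version bumped.
    Properties phase2 = new Properties();
    phase2.setProperty("inter.broker.protocol.version", "0.9.0.0");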


Build failed in Jenkins: kafka-trunk-jdk7 #1145

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: fix NPE in changelogger

--
[...truncated 5242 lines...]
org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > testKTable 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testValueGetter PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > testNumProcesses 
PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamBranchTest > 
testKStreamBranch PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testAggBasic PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testJoin PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testStateStore 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.KeyValueTest > testHashcode PASSED

org.apache.kafka.streams.KeyValueTest > testEquals PASSED

org.apache.kafka.streams.state.internals.OffsetCheckpointTest > testReadWrite 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchBefore PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testInitialLoading PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRolling 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testSegmentMaintenance PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutSameKeyTimestamp PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchAfter PASSED

org.apache.kafka.streams.state.internals.StoreChangeLoggerTest > testRaw PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #476

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3432; Cluster.update() thread-safety

[wangguoz] HOTFIX: fix NPE in changelogger

--
[...truncated 4389 lines...]

kafka.admin.AdminRackAwareTest > testRackAwareExpansion PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED

kafka.admin.AclCommandTest > testAclCli PASSED

kafka.admin.AclCommandTest > testProducerConsumerCli PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > 

Build failed in Jenkins: kafka-0.10.0-jdk7 #11

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: fix NPE in changelogger

[wangguoz] MINOR: remove streams-smoke-test.sh

--
[...truncated 1557 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > 

Re: [VOTE] KIP-43: Kafka SASL enhancements

2016-03-23 Thread Harsha
Any update on this. Gwen since the KIP is adjusted to address the
pluggable classes we should make a move on this.

Rajini,
   Can you restart the voting thread.

Thanks,
Harsha

On Wed, Mar 16, 2016, at 06:42 AM, Rajini Sivaram wrote:
> As discussed in the KIP meeting yesterday, the scope of KIP-43 has been
> reduced so that it can be integrated into 0.10.0.0. The updated KIP is
> here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
> .
> 
> Can we continue the vote on the updated KIP?
> 
> Thank you,
> 
> Rajini
> 
> On Thu, Mar 10, 2016 at 2:09 AM, Gwen Shapira  wrote:
> 
> > Harsha,
> >
> > Since you are clearly in favor of the KIP, do you mind jumping into
> > the discussion thread and help me understand the decision behind the
> > configuration parameters only allowing a single Login and
> > CallbackHandler class? This seems too limiting to me, and while Rajini
> > is trying hard to convince me otherwise, I remain doubtful. Perhaps
> > (since we have similar experience with Hadoop), you can help me see
> > what I am missing.
> >
> > Gwen
> >
> > On Wed, Mar 9, 2016 at 12:02 PM, Harsha  wrote:
> > > +1 (binding)
> > >
> > > On Tue, Mar 8, 2016, at 02:37 AM, tao xiao wrote:
> > >> +1 (non-binding)
> > >>
> > >> On Tue, 8 Mar 2016 at 05:33 Andrew Schofield <
> > >> andrew_schofield_j...@outlook.com> wrote:
> > >>
> > >> > +1 (non-binding)
> > >> >
> > >> > 
> > >> > > From: ism...@juma.me.uk
> > >> > > Date: Mon, 7 Mar 2016 19:52:11 +
> > >> > > Subject: Re: [VOTE] KIP-43: Kafka SASL enhancements
> > >> > > To: dev@kafka.apache.org
> > >> > >
> > >> > > +1 (non-binding)
> > >> > >
> > >> > > On Thu, Mar 3, 2016 at 10:37 AM, Rajini Sivaram <
> > >> > > rajinisiva...@googlemail.com> wrote:
> > >> > >
> > >> > >> I would like to start the voting process for *KIP-43: Kafka SASL
> > >> > >> enhancements*. This KIP extends the SASL implementation in Kafka to
> > >> > support
> > >> > >> new SASL mechanisms to enable Kafka to be integrated with different
> > >> > >> authentication servers.
> > >> > >>
> > >> > >> The KIP is available here for reference:
> > >> > >>
> > >> > >>
> > >> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-43:+Kafka+SASL+enhancements
> > >> > >>
> > >> > >> And here's is a link to the discussion on the mailing list:
> > >> > >>
> > >> > >>
> > >> >
> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201601.mbox/%3CCAOJcB39b9Vy7%3DZEM3tLw2zarCS4A_s-%2BU%2BC%3DuEcWs0712UaYrQ%40mail.gmail.com%3E
> > >> > >>
> > >> > >>
> > >> > >> Thank you...
> > >> > >>
> > >> > >> Regards,
> > >> > >>
> > >> > >> Rajini
> > >> > >>
> > >> >
> >
> 
> 
> 
> -- 
> Regards,
> 
> Rajini


[GitHub] kafka pull request: MINOR: Revert 0.10.0 branch to SNAPSHOT per ch...

2016-03-23 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1126

MINOR: Revert 0.10.0 branch to SNAPSHOT per change in release process



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka minor-release-version

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1126.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1126


commit 6abe7a0bee27927b4bceb16c91443140670a9d88
Author: Gwen Shapira 
Date:   2016-03-23T22:33:14Z

MINOR: Revert 0.10.0 branch to SNAPSHOT per change in release process




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-0.10.0-jdk7 #10

2016-03-23 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #475

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3441: 0.10.0 documentation still says "0.9.0"

--
[...truncated 5274 lines...]
org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testLockStateDirectory PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssginmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment PASSED


[GitHub] kafka pull request: MINOR: remove streams-smoke-test.sh

2016-03-23 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/1125

MINOR: remove streams-smoke-test.sh

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka remove_smoketest_shell_script

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1125.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1125


commit ec89c471bbb818b09093bad0639cf316a2a8e89d
Author: Yasuhiro Matsuda 
Date:   2016-03-23T21:46:37Z

MINOR: remove streams-smoke-test.sh




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: fix NPE in changelogger

2016-03-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1124


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: fix NPE in changelogger

2016-03-23 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/1124

HOTFIX: fix NPE in changelogger

Fix NPE in StoreChangeLogger caused by a record outside the window retention
period.
@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka logger_npe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1124.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1124


commit 3fc5093a7d236b0ecba008dfe890bee300c33e51
Author: Yasuhiro Matsuda 
Date:   2016-03-23T21:17:53Z

HOTFIX: fix NPE in changelogger




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Work started] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-23 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3434 started by Jason Gustafson.
--
> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.
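
For context, the compatibility the ticket asks for is roughly the old 0.9-style
construction sketched below (the topic, offset and values are made up; the new
0.10 constructor with the extra fields is omitted here):

    import org.apache.kafka.clients.consumer.ConsumerRecord;

    // Sketch of the old-style constructor this ticket adds back; test code that
    // builds records by hand is the main beneficiary.
    ConsumerRecord<String, String> record =
        new ConsumerRecord<>("my-topic", 0, 42L, "key", "value");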



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-23 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3434:
---
Status: Patch Available  (was: In Progress)

> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-03-23 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3418 started by Jason Gustafson.
--
> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do to 
> limit the potential for the consumer to fall out of the group (such as 
> tweaking max.poll.records). We should pull this into a separate section (e.g. 
> Jay suggests "Detecting Consumer Failures") and give it a more complete 
> treatment.
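
A minimal sketch of the kind of poll loop such a section would walk through (the
topic, group and max.poll.records value here are illustrative assumptions; the
point is that bounding the records returned per poll() keeps the time between
polls short enough that the consumer is not considered dead):

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "my-group");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    // Limit how much work one iteration can pick up, so the next poll() happens
    // well within the session timeout.
    props.put("max.poll.records", "100");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("my-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            // keep per-record processing short relative to the session timeout
            System.out.println(record.offset() + ": " + record.value());
        }
    }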



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-03-23 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3418:
---
Description: There still seems to be a lot of confusion about the design of 
the poll() loop in regard to consumer liveness. We do mention it in the 
javadoc, but it's a little hidden and we aren't very clear on what the user 
should do to limit the potential for the consumer to fall out of the group 
(such as tweaking max.poll.records). We should pull this into a separate 
section (e.g. Jay suggests "Detecting Consumer Failures") and give it a more 
complete treatment.  (was: There still seems to be a lot of confusion about the 
design of the poll() loop in regard to consumer liveness. We do mention it in 
the javadoc, but it's a little hidden and we aren't very clear on what the user 
should do to (such as tweaking max.poll.records). We should pull this into a 
separate section (e.g. Jay suggests "Detecting Consumer Failures") and give it 
a more complete treatment.)

> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do to 
> limit the potential for the consumer to fall out of the group (such as 
> tweaking max.poll.records). We should pull this into a separate section (e.g. 
> Jay suggests "Detecting Consumer Failures") and give it a more complete 
> treatment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1144

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3441: 0.10.0 documentation still says "0.9.0"

[wangguoz] KAFKA-3432; Cluster.update() thread-safety

--
[...truncated 1769 lines...]
jdk1.7.0_51/jre/lib/amd64/libjfxwebkit.so
jdk1.7.0_51/jre/lib/amd64/libnpt.so
jdk1.7.0_51/jre/lib/amd64/libjavafx-font.so
jdk1.7.0_51/jre/lib/amd64/libawt.so
jdk1.7.0_51/jre/lib/amd64/libprism-es2.so
jdk1.7.0_51/jre/lib/amd64/libsplashscreen.so
jdk1.7.0_51/jre/lib/amd64/libj2pcsc.so
jdk1.7.0_51/jre/lib/amd64/libmlib_image.so
jdk1.7.0_51/jre/lib/amd64/libj2pkcs11.so
jdk1.7.0_51/jre/lib/amd64/libsctp.so
jdk1.7.0_51/jre/lib/amd64/libdt_socket.so
jdk1.7.0_51/jre/lib/amd64/libjavafx-iio.so
jdk1.7.0_51/jre/lib/amd64/libjavaplugin_jni.so
jdk1.7.0_51/jre/lib/amd64/libgstplugins-lite.so
jdk1.7.0_51/jre/lib/amd64/libsunec.so
jdk1.7.0_51/jre/lib/amd64/libnpjp2.so
jdk1.7.0_51/jre/lib/amd64/libdeploy.so
jdk1.7.0_51/jre/lib/amd64/libjava_crw_demo.so
jdk1.7.0_51/jre/lib/amd64/libunpack.so
jdk1.7.0_51/jre/lib/amd64/libjfr.so
jdk1.7.0_51/jre/lib/amd64/libj2gss.so
jdk1.7.0_51/jre/lib/amd64/libt2k.so
jdk1.7.0_51/jre/lib/amd64/libverify.so
jdk1.7.0_51/jre/lib/amd64/libdcpr.so
jdk1.7.0_51/jre/lib/amd64/libjava.so
jdk1.7.0_51/jre/lib/amd64/libJdbcOdbc.so
jdk1.7.0_51/jre/lib/amd64/fxavcodecplugin-52.so
jdk1.7.0_51/jre/lib/amd64/server/
jdk1.7.0_51/jre/lib/amd64/server/libjsig.so
jdk1.7.0_51/jre/lib/amd64/server/Xusage.txt
jdk1.7.0_51/jre/lib/amd64/server/libjvm.so
jdk1.7.0_51/jre/lib/amd64/fxplugins.so
jdk1.7.0_51/jre/lib/amd64/xawt/
jdk1.7.0_51/jre/lib/amd64/xawt/libmawt.so
jdk1.7.0_51/jre/lib/amd64/fxavcodecplugin-53.so
jdk1.7.0_51/jre/lib/amd64/libjaas_unix.so
jdk1.7.0_51/jre/lib/amd64/libgstreamer-lite.so
jdk1.7.0_51/jre/lib/amd64/libmanagement.so
jdk1.7.0_51/jre/lib/amd64/jvm.cfg
jdk1.7.0_51/jre/lib/amd64/libjsdt.so
jdk1.7.0_51/jre/lib/amd64/libzip.so
jdk1.7.0_51/jre/lib/amd64/libglass.so
jdk1.7.0_51/jre/lib/amd64/headless/
jdk1.7.0_51/jre/lib/amd64/headless/libmawt.so
jdk1.7.0_51/jre/lib/amd64/libjsound.so
jdk1.7.0_51/jre/lib/amd64/libjdwp.so
jdk1.7.0_51/jre/lib/amd64/libjawt.so
jdk1.7.0_51/jre/lib/fontconfig.SuSE.10.bfc
jdk1.7.0_51/jre/lib/classlist
jdk1.7.0_51/jre/lib/management-agent.jar
jdk1.7.0_51/jre/lib/javaws.jar
jdk1.7.0_51/jre/lib/psfontj2d.properties
jdk1.7.0_51/jre/lib/rt.jar
jdk1.7.0_51/jre/lib/calendars.properties
jdk1.7.0_51/jre/lib/security/
jdk1.7.0_51/jre/lib/security/local_policy.jar
jdk1.7.0_51/jre/lib/security/trusted.libraries
jdk1.7.0_51/jre/lib/security/javafx.policy
jdk1.7.0_51/jre/lib/security/cacerts
jdk1.7.0_51/jre/lib/security/java.policy
jdk1.7.0_51/jre/lib/security/US_export_policy.jar
jdk1.7.0_51/jre/lib/security/java.security
jdk1.7.0_51/jre/lib/security/blacklist
jdk1.7.0_51/jre/lib/security/javaws.policy
jdk1.7.0_51/jre/lib/jfxrt.jar
jdk1.7.0_51/jre/lib/fontconfig.Turbo.properties.src
jdk1.7.0_51/jre/lib/jfr/
jdk1.7.0_51/jre/lib/jfr/profile.jfc
jdk1.7.0_51/jre/lib/jfr/default.jfc
jdk1.7.0_51/jre/lib/fontconfig.RedHat.5.properties.src
jdk1.7.0_51/jre/lib/net.properties
jdk1.7.0_51/jre/lib/content-types.properties
jdk1.7.0_51/jre/lib/fontconfig.RedHat.5.bfc
jdk1.7.0_51/jre/lib/alt-rt.jar
jdk1.7.0_51/jre/lib/logging.properties
jdk1.7.0_51/jre/lib/applet/
jdk1.7.0_51/jre/lib/cmm/
jdk1.7.0_51/jre/lib/cmm/LINEAR_RGB.pf
jdk1.7.0_51/jre/lib/cmm/GRAY.pf
jdk1.7.0_51/jre/lib/cmm/sRGB.pf
jdk1.7.0_51/jre/lib/cmm/PYCC.pf
jdk1.7.0_51/jre/lib/cmm/CIEXYZ.pf
jdk1.7.0_51/jre/lib/locale/
jdk1.7.0_51/jre/lib/locale/zh_TW.BIG5/
jdk1.7.0_51/jre/lib/locale/zh_TW.BIG5/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/zh_TW.BIG5/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/pt_BR/
jdk1.7.0_51/jre/lib/locale/pt_BR/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/pt_BR/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/de/
jdk1.7.0_51/jre/lib/locale/de/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/de/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/it/
jdk1.7.0_51/jre/lib/locale/it/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/it/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/zh_TW/
jdk1.7.0_51/jre/lib/locale/zh_TW/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/zh_TW/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/zh/
jdk1.7.0_51/jre/lib/locale/zh/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/zh/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/ja/
jdk1.7.0_51/jre/lib/locale/ja/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/ja/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/fr/
jdk1.7.0_51/jre/lib/locale/fr/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/fr/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/ko/
jdk1.7.0_51/jre/lib/locale/ko/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/ko/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/ko.UTF-8/
jdk1.7.0_51/jre/lib/locale/ko.UTF-8/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/ko.UTF-8/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/zh.GBK/

[jira] [Updated] (KAFKA-3432) Cluster.update() thread-safety

2016-03-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3432:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1118
[https://github.com/apache/kafka/pull/1118]

> Cluster.update() thread-safety
> --
>
> Key: KAFKA-3432
> URL: https://issues.apache.org/jira/browse/KAFKA-3432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> A `Cluster.update()` method was introduced during the development of 0.10.0 
> so that `StreamPartitionAssignor` can add internal topics on-the-fly and give 
> the augmented metadata to its underlying grouper.
> `Cluster` was supposed to be immutable after construction and all 
> synchronization happens via the `Metadata` instance. As far as I can see 
> `Cluster.update()` is not thread-safe even though `Cluster` is accessed by 
> multiple threads in some cases (I am not sure about the Streams case). Since 
> this is a public API, it is important to fix this in my opinion.
> A few options I can think of:
> * Since `PartitionAssignor` is an internal class, change 
> `PartitionAssignor.assign` to return a class containing the assignments and 
> optionally an updated cluster. This is straightforward, but I am not sure if 
> it's good enough for the Streams use-case. Can you please confirm [~guozhang]?
> * Pass `Metadata` instead of `Cluster` to `PartitionAssignor.assign`, giving 
> assignors the ability to update the metadata as needed.
> * Make `Cluster` thread-safe in the face of mutations (without relying on 
> synchronization at the `Metadata` level). This is not ideal, KAFKA-3428 shows 
> that the synchronization at `Metadata` level is already too costly for high 
> concurrency situations.
> Thoughts [~guozhang], [~hachikuji]?
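
A hedged sketch of the copy-on-write idea behind the first and third options,
using a stand-in class rather than the real Cluster API (all names below are
hypothetical): an assignor that needs extra internal topics builds and returns a
new immutable snapshot instead of calling update() on the instance other threads
are reading.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Hypothetical stand-in for Cluster: immutable, so "updating" means building a new one.
    final class TopicMetadataSnapshot {
        private final List<String> topics;

        TopicMetadataSnapshot(List<String> topics) {
            this.topics = Collections.unmodifiableList(new ArrayList<>(topics));
        }

        List<String> topics() {
            return topics;
        }

        // Copy-on-write: callers get a new snapshot back instead of mutating shared state.
        TopicMetadataSnapshot withTopic(String internalTopic) {
            List<String> augmented = new ArrayList<>(topics);
            augmented.add(internalTopic);
            return new TopicMetadataSnapshot(augmented);
        }
    }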



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3432) Cluster.update() thread-safety

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209148#comment-15209148
 ] 

ASF GitHub Bot commented on KAFKA-3432:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1118


> Cluster.update() thread-safety
> --
>
> Key: KAFKA-3432
> URL: https://issues.apache.org/jira/browse/KAFKA-3432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> A `Cluster.update()` method was introduced during the development of 0.10.0 
> so that `StreamPartitionAssignor` can add internal topics on-the-fly and give 
> the augmented metadata to its underlying grouper.
> `Cluster` was supposed to be immutable after construction and all 
> synchronization happens via the `Metadata` instance. As far as I can see 
> `Cluster.update()` is not thread-safe even though `Cluster` is accessed by 
> multiple threads in some cases (I am not sure about the Streams case). Since 
> this is a public API, it is important to fix this in my opinion.
> A few options I can think of:
> * Since `PartitionAssignor` is an internal class, change 
> `PartitionAssignor.assign` to return a class containing the assignments and 
> optionally an updated cluster. This is straightforward, but I am not sure if 
> it's good enough for the Streams use-case. Can you please confirm [~guozhang]?
> * Pass `Metadata` instead of `Cluster` to `PartitionAssignor.assign`, giving 
> assignors the ability to update the metadata as needed.
> * Make `Cluster` thread-safe in the face of mutations (without relying on 
> synchronization at the `Metadata` level). This is not ideal, KAFKA-3428 shows 
> that the synchronization at `Metadata` level is already too costly for high 
> concurrency situations.
> Thoughts [~guozhang], [~hachikuji]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3432; Cluster.update() thread-safety

2016-03-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1118


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3455) Connect custom processors with the streams DSL

2016-03-23 Thread Jonathan Bender (JIRA)
Jonathan Bender created KAFKA-3455:
--

 Summary: Connect custom processors with the streams DSL
 Key: KAFKA-3455
 URL: https://issues.apache.org/jira/browse/KAFKA-3455
 Project: Kafka
  Issue Type: Sub-task
  Components: kafka streams
Affects Versions: 0.9.0.1
Reporter: Jonathan Bender


From the kafka users email thread, we discussed the idea of connecting custom
processors with topologies defined from the Streams DSL (and being able to
sink data from the processor). Possibly this could involve exposing the
underlying processor's name in the streams DSL so it can be connected with the
standard processor API.
{quote}
Thanks for the feedback. This is definitely something we wanted to support
in the Streams DSL.

One tricky thing, though, is that some operations do not translate to a
single processor, but a sub-graph of processors (think of a stream-stream
join, which is translated to actually 5 processors for windowing / state
queries / merging, each with a different internal name). So how to define
the API to return the processor name needs some more thinking.
{quote}
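
For orientation, the low-level route that exists today, reduced to a sketch
(topic and node names are made up, and MyProcessor stands for a hypothetical
Processor implementation): the ticket is about letting a processor like this
attach to nodes that the DSL created, which is why the DSL's internal node names
would need to be exposed.

    import org.apache.kafka.streams.processor.TopologyBuilder;

    // Sketch only: wiring a custom processor with the low-level Processor API.
    TopologyBuilder builder = new TopologyBuilder();
    builder.addSource("SOURCE", "input-topic")
           .addProcessor("MY-PROCESSOR", MyProcessor::new, "SOURCE")  // MyProcessor is hypothetical
           .addSink("SINK", "output-topic", "MY-PROCESSOR");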




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3454) Add Kafka Streams section in documentation

2016-03-23 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3454:


 Summary: Add Kafka Streams section in documentation
 Key: KAFKA-3454
 URL: https://issues.apache.org/jira/browse/KAFKA-3454
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Fix For: 0.10.0.0


We need to add the Kafka Streams section in documents since the 0.10.0.0 
release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-23 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3441:

Fix Version/s: 0.10.1.0

> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANN] Small change to release process

2016-03-23 Thread Grant Henke
Works for me. Thanks for the quick response!

On Wed, Mar 23, 2016 at 3:09 PM, Gwen Shapira  wrote:

> It was decided (rather unilaterally by your humble release manager), but
> that doesn't mean decisions can't change :)
>
> 1. We can definitely have multiple tags for each separate RC. Actually I
> think this is better than moving the tag.
> 2. We can't change release artifacts after the vote, which means that the
> artifacts we vote on must be 0.10.0.0. We can change the version in the
> branch itself from SNAPSHOT to RC (and still release 0.10.0.0 as
> candidates), but since SNAPSHOT has well defined semantics, I think it
> makes sense to keep it.
>
> Gwen
>
>
>
> On Wed, Mar 23, 2016 at 1:01 PM, Grant Henke  wrote:
>
> > Hi Gwen,
> >
> > Is the new release process already decided? If so you can ignore my
> > question below.
> >
> > Since we are making a change to the process. What do you think about
> > publishing release candidates using "RC" in the version? So instead of
> > re-using the 0.10.0.0 tag and using 0.10.0.0-SNAPSHOT as the version
> until
> > release. Instead each new release candidate changes the version and tag
> to
> > be 0.10.0.0-RC1, 0.10.0.0-RC2, etc.
> >
> > Thanks,
> > Grant
> >
> >
> >
> > On Wed, Mar 23, 2016 at 2:06 PM, Gwen Shapira  wrote:
> >
> > > Hey Team Kafka,
> > >
> > > As you know, our current release process is:
> > > * Branch
> > > * Change version from 0.10.0.0-SNAPSHOT to 0.10.0.0
> > > * Roll out a release candidate with version 0.10.0.0
> > > * Fix bugs and keep rolling out release candidates
> > > * After vote passed and release was published, bump the branch to
> > > 0.10.0.1-SNAPSHOT
> > >
> > > As a result, we have multiple artifacts with the same version but different
> > > content, which is a huge no-no and can cause technical issues with Maven
> > > repositories (clients may miss updates because they already have earlier
> > > releases cached), and releases should be immutable.
> > >
> > > To streamline our process a bit, we are going to move to the following
> > > process:
> > >
> > > * Branch
> > > * Change the version on trunk to 0.10.1.0-SNAPSHOT but keep the branch
> > > version as 0.10.0.0 SNAPSHOT
> > > * Before every release candidate, we'll push a commit that changes
> > version
> > > to 0.10.0.0 to a tag and release from the tag. If needed, commits will
> go
> > > to the release branch with the SNAPSHOT version.
> > > * Once vote passes, we'll publish the release, merge the tag commit to
> > the
> > > branch, and bump the branch to  0.10.0.1-SNAPSHOT
> > >
> > > This is very similar to the process that Zookeeper community is
> following
> > > and will help anyone who builds against branches.
> > >
> > > As an awkward first step, I'm moving the 0.10.0.0 branch back to
> > > 0.10.0.0-SNAPSHOT.
> > >
> > > Sorry for the awkwardness, but this will improve things going forward.
> > >
> > > Gwen
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: [ANN] Small change to release process

2016-03-23 Thread Gwen Shapira
It was decided (rather unilaterally by your humble release manager), but
that doesn't mean decisions can't change :)

1. We can definitely have multiple tags for each separate RC. Actually I
think this is better than moving the tag.
2. We can't change release artifacts after the vote, which means that the
artifacts we vote on must be 0.10.0.0. We can change the version in the
branch itself from SNAPSHOT to RC (and still release 0.10.0.0 as
candidates), but since SNAPSHOT has well defined semantics, I think it
makes sense to keep it.

Gwen



On Wed, Mar 23, 2016 at 1:01 PM, Grant Henke  wrote:

> Hi Gwen,
>
> Is the new release process already decided? If so you can ignore my
> question below.
>
> Since we are making a change to the process. What do you think about
> publishing release candidates using "RC" in the version? So instead of
> re-using the 0.10.0.0 tag and using 0.10.0.0-SNAPSHOT as the version until
> release. Instead each new release candidate changes the version and tag to
> be 0.10.0.0-RC1, 0.10.0.0-RC2, etc.
>
> Thanks,
> Grant
>
>
>
> On Wed, Mar 23, 2016 at 2:06 PM, Gwen Shapira  wrote:
>
> > Hey Team Kafka,
> >
> > As you know, our current release process is:
> > * Branch
> > * Change version from 0.10.0.0-SNAPSHOT to 0.10.0.0
> > * Roll out a release candidate with version 0.10.0.0
> > * Fix bugs and keep rolling out release candidates
> > * After vote passed and release was published, bump the branch to
> > 0.10.0.1-SNAPSHOT
> >
> > As a result, we have multiple artifacts with the same version but different
> > content, which is a huge no-no and can cause technical issues with Maven
> > repositories (clients may miss updates because they already have earlier
> > releases cached), and releases should be immutable.
> >
> > To streamline our process a bit, we are going to move to the following
> > process:
> >
> > * Branch
> > * Change the version on trunk to 0.10.1.0-SNAPSHOT but keep the branch
> > version as 0.10.0.0 SNAPSHOT
> > * Before every release candidate, we'll push a commit that changes
> version
> > to 0.10.0.0 to a tag and release from the tag. If needed, commits will go
> > to the release branch with the SNAPSHOT version.
> > * Once vote passes, we'll publish the release, merge the tag commit to
> the
> > branch, and bump the branch to  0.10.0.1-SNAPSHOT
> >
> > This is very similar to the process that Zookeeper community is following
> > and will help anyone who builds against branches.
> >
> > As an awkward first step, I'm moving the 0.10.0.0 branch back to
> > 0.10.0.0-SNAPSHOT.
> >
> > Sorry for the awkwardness, but this will improve things going forward.
> >
> > Gwen
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>


[jira] [Resolved] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-23 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-3441.

Resolution: Fixed

> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3453) Transient test failures due to MiniKDC port allocation strategy

2016-03-23 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-3453:


 Summary: Transient test failures due to MiniKDC port allocation 
strategy
 Key: KAFKA-3453
 URL: https://issues.apache.org/jira/browse/KAFKA-3453
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
Reporter: Ewen Cheslack-Postava


A number of tests, especially our consumer tests, fail transiently because 
MiniKDC allocates ports by creating a socket, getting its port, then closing 
it. As previously addressed in our own code, this causes problems because that 
port can be reallocated before the process has a chance to bind a new socket -- 
whether due to another test running in parallel or another process simply 
binding the port first. This results in errors like this in the tests:

{quote}
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:547)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$400(AbstractPollingIoAcceptor.java:68)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:422)
at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{quote}

This is an ongoing issue that Confluent sees in its Jenkins builds, which is 
the reason for this ticket. The real issue is actually in MiniKDC (we pass in 
"0" for the port, but then it uses this other port allocation strategy), but we 
either need to a) figure out a workaround or b) get a fix in upstream and then 
update to a newer MiniKDC version.
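
The racy pattern described above, reduced to plain java.net code (a sketch, not
the actual MiniKDC source): the port is discovered by opening and then closing a
socket, and anything can grab it before the server binds it again; binding port 0
directly and holding on to the socket avoids the window entirely.

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    // Racy: find a "free" port by opening and closing a socket. Between close() and
    // the real server's later bind, another process can take the same port.
    static int allocatePortRacy() throws Exception {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    // Safer: bind port 0 in the component that needs it and ask the still-bound
    // socket which ephemeral port it received.
    static ServerSocket bindEphemeral() throws Exception {
        ServerSocket socket = new ServerSocket();
        socket.bind(new InetSocketAddress(0));
        return socket;   // getLocalPort() reports the allocated port, still held
    }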



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANN] Small change to release process

2016-03-23 Thread Grant Henke
Hi Gwen,

Is the new release process already decided? If so you can ignore my
question below.

Since we are making a change to the process. What do you think about
publishing release candidates using "RC" in the version? So instead of
re-using the 0.10.0.0 tag and using 0.10.0.0-SNAPSHOT as the version until
release. Instead each new release candidate changes the version and tag to
be 0.10.0.0-RC1, 0.10.0.0-RC2, etc.

Thanks,
Grant



On Wed, Mar 23, 2016 at 2:06 PM, Gwen Shapira  wrote:

> Hey Team Kafka,
>
> As you know, our current release process is:
> * Branch
> * Change version from 0.10.0.0-SNAPSHOT to 0.10.0.0
> * Roll out a release candidate with version 0.10.0.0
> * Fix bugs and keep rolling out release candidates
> * After vote passed and release was published, bump the branch to
> 0.10.0.1-SNAPSHOT
>
> As a result, we have multiple artifacts with the same version but different
> content, which is a huge no-no and can cause technical issues with Maven
> repositories (clients may miss updates because they already have earlier
> releases cached, and releases should be immutable).
>
> To streamline our process a bit, we are going to move to the following
> process:
>
> * Branch
> * Change the version on trunk to 0.10.1.0-SNAPSHOT but keep the branch
> version as 0.10.0.0-SNAPSHOT
> * Before every release candidate, we'll push a commit that changes the version
> to 0.10.0.0, tag it, and release from the tag. If needed, commits will go
> to the release branch with the SNAPSHOT version.
> * Once the vote passes, we'll publish the release, merge the tag commit to the
> branch, and bump the branch to 0.10.0.1-SNAPSHOT
>
> This is very similar to the process that Zookeeper community is following
> and will help anyone who builds against branches.
>
> As an awkward first step, I'm moving the 0.10.0.0 branch back to
> 0.10.0.0-SNAPSHOT.
>
> Sorry for the awkwardness, but this will improve things going forward.
>
> Gwen
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[jira] [Commented] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209037#comment-15209037
 ] 

ASF GitHub Bot commented on KAFKA-3441:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1122


> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3441: 0.10.0 documentation still says "0...

2016-03-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1122


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209028#comment-15209028
 ] 

ASF GitHub Bot commented on KAFKA-3434:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1123

KAFKA-3434: add old constructor to ConsumerRecord



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3434

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1123.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1123


commit 8b172ca9d0c41a419a5a6ddccb606765fe2e3bb3
Author: Jason Gustafson 
Date:   2016-03-23T19:50:19Z

KAFKA-3434: add old constructor to ConsumerRecord




> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.
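
A minimal, self-contained sketch of the compatibility pattern the ticket asks for: keep the 
old, shorter constructor and have it delegate to the newer, fuller one with defaults. The 
class and field names below are simplified stand-ins for illustration, not the actual 
ConsumerRecord code or the patch in the pull request:

{code}
public class RecordCompatSketch<K, V> {
    public static final long NO_TIMESTAMP = -1L;

    private final String topic;
    private final int partition;
    private final long offset;
    private final long timestamp; // stand-in for one of the fields added in 0.10.0.0
    private final K key;
    private final V value;

    // Newer, fuller constructor: every field passes through here.
    public RecordCompatSketch(String topic, int partition, long offset,
                              long timestamp, K key, V value) {
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
        this.timestamp = timestamp;
        this.key = key;
        this.value = value;
    }

    // Old-style constructor kept for compatibility: delegates with a default.
    public RecordCompatSketch(String topic, int partition, long offset, K key, V value) {
        this(topic, partition, offset, NO_TIMESTAMP, key, value);
    }

    public static void main(String[] args) {
        RecordCompatSketch<String, String> r =
                new RecordCompatSketch<>("foobar", 0, 100L, "key", "value");
        System.out.println(r.topic + "-" + r.partition + "@" + r.offset);
    }
}
{code}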



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3434: add old constructor to ConsumerRec...

2016-03-23 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1123

KAFKA-3434: add old constructor to ConsumerRecord



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3434

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1123.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1123


commit 8b172ca9d0c41a419a5a6ddccb606765fe2e3bb3
Author: Jason Gustafson 
Date:   2016-03-23T19:50:19Z

KAFKA-3434: add old constructor to ConsumerRecord




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[ANN] Small change to release process

2016-03-23 Thread Gwen Shapira
Hey Team Kafka,

As you know, our current release process is:
* Branch
* Change version from 0.10.0.0-SNAPSHOT to 0.10.0.0
* Roll out a release candidate with version 0.10.0.0
* Fix bugs and keep rolling out release candidates
* After vote passed and release was published, bump the branch to
0.10.0.1-SNAPSHOT

As a result, we have multiple artifacts with the same version but different
content, which is a huge no-no and can cause technical issues with Maven
repositories (clients may miss updates because they already have earlier
releases cached, and releases should be immutable).

To streamline our process a bit, we are going to move to the following
process:

* Branch
* Change the version on trunk to 0.10.1.0-SNAPSHOT but keep the branch
version as 0.10.0.0-SNAPSHOT
* Before every release candidate, we'll push a commit that changes the version
to 0.10.0.0, tag it, and release from the tag. If needed, commits will go
to the release branch with the SNAPSHOT version.
* Once the vote passes, we'll publish the release, merge the tag commit to the
branch, and bump the branch to 0.10.0.1-SNAPSHOT

This is very similar to the process that Zookeeper community is following
and will help anyone who builds against branches.

As an awkward first step, I'm moving the 0.10.0.0 branch back to
0.10.0.0-SNAPSHOT.

Sorry for the awkwardness, but this will improve things going forward.

Gwen


RE: [VOTE] KIP-51 - List Connectors REST API

2016-03-23 Thread Andrew Schofield
+1 (non-binding)

+1 for re-evaluating the KIP process. This one went through very quickly, but 
they can drag on sometimes.


> Date: Wed, 23 Mar 2016 11:31:37 -0700
> Subject: Re: [VOTE] KIP-51 - List Connectors REST API
> From: wangg...@gmail.com
> To: dev@kafka.apache.org
>
> +1.
>
> On Wed, Mar 23, 2016 at 10:24 AM, Ashish Singh  wrote:
>
>> +1 (non-binding)
>>
>> On Wed, Mar 23, 2016 at 10:00 AM, Gwen Shapira  wrote:
>>
>>> Very large +1 on re-evaluating the KIP process.
>>>
>>> I was hoping we can do a meta-kip meeting after the release (Maybe even
>>> in-person at Kafka Summit?) to discuss.
>>>
>>> On Tue, Mar 22, 2016 at 7:59 PM, Grant Henke 
>> wrote:
>>>
 +1 (non-binding)

 I am also a +1 to evaluating the KIP process and ways to make it more
 effective and streamlined.

 On Tue, Mar 22, 2016 at 6:04 PM, Neha Narkhede 
>>> wrote:

> +1 (binding)
>
> On Tue, Mar 22, 2016 at 3:56 PM, Liquan Pei 
>>> wrote:
>
>> +1
>>
>> On Tue, Mar 22, 2016 at 3:54 PM, Gwen Shapira 
 wrote:
>>
>>> +1
>>>
>>> Straight forward enough and can't possibly break anything.
>>>
>>> On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava <
>> e...@confluent.io>
>>> wrote:
>>>
 Since it's pretty minimal, we'd like to squeeze it into 0.10 if
>> possible,
 and VOTE threads take 3 days, it was suggested it might make
>>> sense
 to
>>> just
 kick off voting on this KIP immediately (and restart it if
>>> someone
>> raises
 an issue). Feel free to object and comment in the DISCUSS
>> thread
>>> if
> you
 feel there's something to still be discussed.



>>>
>>
>

>>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API

 I'll obviously kick things off with a +1.

 -Ewen

>>>
>>
>>
>>
>> --
>> Liquan Pei
>> Department of Physics
>> University of Massachusetts Amherst
>>
>
>
>
> --
> Thanks,
> Neha
>



 --
 Grant Henke
 Software Engineer | Cloudera
 gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke

>>>
>>
>>
>>
>> --
>>
>> Regards,
>> Ashish
>>
>
>
>
> --
> -- Guozhang
  

Re: [VOTE] KIP-51 - List Connectors REST API

2016-03-23 Thread Guozhang Wang
+1.

On Wed, Mar 23, 2016 at 10:24 AM, Ashish Singh  wrote:

> +1 (non-binding)
>
> On Wed, Mar 23, 2016 at 10:00 AM, Gwen Shapira  wrote:
>
> > Very large +1 on re-evaluating the KIP process.
> >
> > I was hoping we can do a meta-kip meeting after the release (Maybe even
> > in-person at Kafka Summit?) to discuss.
> >
> > On Tue, Mar 22, 2016 at 7:59 PM, Grant Henke 
> wrote:
> >
> > > +1 (non-binding)
> > >
> > > I am also a +1 to evaluating the KIP process and ways to make it more
> > > effective and streamlined.
> > >
> > > On Tue, Mar 22, 2016 at 6:04 PM, Neha Narkhede 
> > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > On Tue, Mar 22, 2016 at 3:56 PM, Liquan Pei 
> > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Tue, Mar 22, 2016 at 3:54 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > Straight forward enough and can't possibly break anything.
> > > > > >
> > > > > > On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava <
> > > > > e...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > > > Since it's pretty minimal, we'd like to squeeze it into 0.10 if
> > > > > possible,
> > > > > > > and VOTE threads take 3 days, it was suggested it might make
> > sense
> > > to
> > > > > > just
> > > > > > > kick off voting on this KIP immediately (and restart it if
> > someone
> > > > > raises
> > > > > > > an issue). Feel free to object and comment in the DISCUSS
> thread
> > if
> > > > you
> > > > > > > feel there's something to still be discussed.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
> > > > > > >
> > > > > > > I'll obviously kick things off with a +1.
> > > > > > >
> > > > > > > -Ewen
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Liquan Pei
> > > > > Department of Physics
> > > > > University of Massachusetts Amherst
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks,
> > > > Neha
> > > >
> > >
> > >
> > >
> > > --
> > > Grant Henke
> > > Software Engineer | Cloudera
> > > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> > >
> >
>
>
>
> --
>
> Regards,
> Ashish
>



-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk7 #1143

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3409: handle CommitFailedException in MirrorMaker

--
[...truncated 1555 lines...]

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED


[jira] [Created] (KAFKA-3452) Support session windows besides time interval windows

2016-03-23 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3452:


 Summary: Support session windows besides time interval windows
 Key: KAFKA-3452
 URL: https://issues.apache.org/jira/browse/KAFKA-3452
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
 Fix For: 0.10.1.0


The Streams DSL currently does not provide session windows as in the DataFlow 
model. We have seen some common use cases for this feature, so it would be better 
to add this support asap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208912#comment-15208912
 ] 

ASF GitHub Bot commented on KAFKA-3441:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1122

KAFKA-3441: 0.10.0 documentation still says "0.9.0"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka docs-10

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1122.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1122


commit d10054d14ad90f13e803d865014c1c5b584ba34c
Author: Grant Henke 
Date:   2016-03-23T18:16:43Z

KAFKA-3441: 0.10.0 documentation still says "0.9.0"




> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #474

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3409: handle CommitFailedException in MirrorMaker

--
[...truncated 3153 lines...]
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 

[GitHub] kafka pull request: KAFKA-3441: 0.10.0 documentation still says "0...

2016-03-23 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1122

KAFKA-3441: 0.10.0 documentation still says "0.9.0"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka docs-10

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1122.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1122


commit d10054d14ad90f13e803d865014c1c5b584ba34c
Author: Grant Henke 
Date:   2016-03-23T18:16:43Z

KAFKA-3441: 0.10.0 documentation still says "0.9.0"




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-0.10.0-jdk7 #9

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3409: handle CommitFailedException in MirrorMaker

--
[...truncated 1573 lines...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 

Re: [VOTE] KIP-51 - List Connectors REST API

2016-03-23 Thread Ashish Singh
+1 (non-binding)

On Wed, Mar 23, 2016 at 10:00 AM, Gwen Shapira  wrote:

> Very large +1 on re-evaluating the KIP process.
>
> I was hoping we can do a meta-kip meeting after the release (Maybe even
> in-person at Kafka Summit?) to discuss.
>
> On Tue, Mar 22, 2016 at 7:59 PM, Grant Henke  wrote:
>
> > +1 (non-binding)
> >
> > I am also a +1 to evaluating the KIP process and ways to make it more
> > effective and streamlined.
> >
> > On Tue, Mar 22, 2016 at 6:04 PM, Neha Narkhede 
> wrote:
> >
> > > +1 (binding)
> > >
> > > On Tue, Mar 22, 2016 at 3:56 PM, Liquan Pei 
> wrote:
> > >
> > > > +1
> > > >
> > > > On Tue, Mar 22, 2016 at 3:54 PM, Gwen Shapira 
> > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > Straight forward enough and can't possibly break anything.
> > > > >
> > > > > On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava <
> > > > e...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Since it's pretty minimal, we'd like to squeeze it into 0.10 if
> > > > possible,
> > > > > > and VOTE threads take 3 days, it was suggested it might make
> sense
> > to
> > > > > just
> > > > > > kick off voting on this KIP immediately (and restart it if
> someone
> > > > raises
> > > > > > an issue). Feel free to object and comment in the DISCUSS thread
> if
> > > you
> > > > > > feel there's something to still be discussed.
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
> > > > > >
> > > > > > I'll obviously kick things off with a +1.
> > > > > >
> > > > > > -Ewen
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Liquan Pei
> > > > Department of Physics
> > > > University of Massachusetts Amherst
> > > >
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Neha
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>



-- 

Regards,
Ashish


[jira] [Updated] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3451:
---
Attachment: Jacoco-html.zip
scoverage.zip

> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
> Attachments: Jacoco-html.zip, scoverage.zip
>
>
> Having some basic ability to report and view coverage is valuable and a good 
> start. This may not be perfect and enhancements should be tracked under the 
> KAFKA-1722 umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3451:
---
Status: Patch Available  (was: Open)

> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
> Attachments: Jacoco-html.zip, scoverage.zip
>
>
> Having some basic ability to report and view coverage is valuable and a good 
> start. This may not be perfect and enhancements should be tracked under the 
> KAFKA-1722 umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208786#comment-15208786
 ] 

Grant Henke commented on KAFKA-3451:


Attached sample report output.

> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
> Attachments: Jacoco-html.zip, scoverage.zip
>
>
> Having some basic ability to report and view coverage is valuable and a good 
> start. This may not be perfect and enhancements should be tracked under the 
> KAFKA-1722 umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208780#comment-15208780
 ] 

ASF GitHub Bot commented on KAFKA-3451:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1121

KAFKA-3451: Add basic HTML coverage report generation to gradle



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka coverage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1121


commit 3ccf9fc21e28b8b58915d644659c3eb04123bc85
Author: Grant Henke 
Date:   2016-03-23T17:03:37Z

KAFKA-3451: Add basic HTML coverage report generation to gradle




> Add basic HTML coverage report generation to gradle
> ---
>
> Key: KAFKA-3451
> URL: https://issues.apache.org/jira/browse/KAFKA-3451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> Having some basic ability to report and view coverage is valuable and a good 
> start. This may not be perfect and enhancements should be tracked under the 
> KAFKA-1722 umbrella, but it's a start. 
> This will use Jacoco to report on the Java projects and Scoverage to report 
> on the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3451: Add basic HTML coverage report gen...

2016-03-23 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1121

KAFKA-3451: Add basic HTML coverage report generation to gradle



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka coverage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1121


commit 3ccf9fc21e28b8b58915d644659c3eb04123bc85
Author: Grant Henke 
Date:   2016-03-23T17:03:37Z

KAFKA-3451: Add basic HTML coverage report generation to gradle




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3451) Add basic HTML coverage report generation to gradle

2016-03-23 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3451:
--

 Summary: Add basic HTML coverage report generation to gradle
 Key: KAFKA-3451
 URL: https://issues.apache.org/jira/browse/KAFKA-3451
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke


Having some basic ability to report and view coverage is valuable and a good 
start. This may not be perfect and enhancements should be tracked under the 
KAFKA-1722 umbrella, but it's a start. 

This will use Jacoco to report on the Java projects and Scoverage to report on 
the Scala projects (core). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-51 - List Connectors REST API

2016-03-23 Thread Gwen Shapira
Very large +1 on re-evaluating the KIP process.

I was hoping we can do a meta-kip meeting after the release (Maybe even
in-person at Kafka Summit?) to discuss.

On Tue, Mar 22, 2016 at 7:59 PM, Grant Henke  wrote:

> +1 (non-binding)
>
> I am also a +1 to evaluating the KIP process and ways to make it more
> effective and streamlined.
>
> On Tue, Mar 22, 2016 at 6:04 PM, Neha Narkhede  wrote:
>
> > +1 (binding)
> >
> > On Tue, Mar 22, 2016 at 3:56 PM, Liquan Pei  wrote:
> >
> > > +1
> > >
> > > On Tue, Mar 22, 2016 at 3:54 PM, Gwen Shapira 
> wrote:
> > >
> > > > +1
> > > >
> > > > Straight forward enough and can't possibly break anything.
> > > >
> > > > On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava <
> > > e...@confluent.io>
> > > > wrote:
> > > >
> > > > > Since it's pretty minimal, we'd like to squeeze it into 0.10 if
> > > possible,
> > > > > and VOTE threads take 3 days, it was suggested it might make sense
> to
> > > > just
> > > > > kick off voting on this KIP immediately (and restart it if someone
> > > raises
> > > > > an issue). Feel free to object and comment in the DISCUSS thread if
> > you
> > > > > feel there's something to still be discussed.
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
> > > > >
> > > > > I'll obviously kick things off with a +1.
> > > > >
> > > > > -Ewen
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Liquan Pei
> > > Department of Physics
> > > University of Massachusetts Amherst
> > >
> >
> >
> >
> > --
> > Thanks,
> > Neha
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>


[jira] [Commented] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2016-03-23 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208764#comment-15208764
 ] 

Jiangjie Qin commented on KAFKA-3003:
-

In this particular case, the problem was that the high watermark was not updated 
correctly. Because the high watermark is only used for consumer fetches, it should 
not affect replica fetchers.

> The fetch.wait.max.ms is not honored when new log segment rolled for low 
> volume topics.
> ---
>
> Key: KAFKA-3003
> URL: https://issues.apache.org/jira/browse/KAFKA-3003
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The problem we saw can be explained by the example below:
> 1. Message offset 100 is appended to partition p0, log segment .log. 
> at time T. After that no message is appended. 
> 2. This message is replicated, leader replica update its 
> highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
> 3. At time T + retention.ms, because no message has been appended to current 
> active log segment for retention.ms, the last modified time of the current 
> log segment reaches retention time. 
> 4. Broker rolls out a new log segment 0001.log, and deletes the old log 
> segment .log. The new log segment in this case is empty because there 
> is no message appended. 
> 5. In Log, the nextOffsetMetadata.segmentBaseOffset will be updated to the 
> new log segment's base offset, but nextOffsetMetadata.messageOffset does not 
> change. so nextOffsetMetadata.messageOffset=1, 
> nextOffsetMetadata.segmentBaseOffset=1.
> 6. Now a FetchRequest comes and try to fetch from offset 1, 
> fetch.wait.max.ms=1000.
> 7. In ReplicaManager, because there is no data to return, the fetch request 
> will be put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() 
> is called, the DelayedFetch.tryComplete() compares replica.highWatermark and 
> the fetchOffset returned by log.read(), it will see the 
> replica.highWatermark.segmentBaseOffset=0 and 
> fetchOffset.segmentBaseOffset=1. So it will assume the fetch occurs on a 
> later segment and complete the delayed fetch immediately.
> In this case, the replica.highWatermark was not updated because 
> LogOffsetMetadata.precedes() only checks the messageOffset and ignores the 
> segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
> messageOffset and then the segmentBaseOffset, so replica.highWatermark will 
> get updated after the follower fetches from the leader.
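
A minimal Java sketch of the ordering rule described in the last paragraph (the real 
LogOffsetMetadata is a Scala class in core; the class below and its example values are 
purely illustrative):

{code}
final class OffsetMetadataSketch {
    final long messageOffset;
    final long segmentBaseOffset;

    OffsetMetadataSketch(long messageOffset, long segmentBaseOffset) {
        this.messageOffset = messageOffset;
        this.segmentBaseOffset = segmentBaseOffset;
    }

    // Old behaviour: only the message offset is compared, so an entry on an
    // older segment never "precedes" one with the same message offset on a
    // newer segment, and the high watermark is never moved forward.
    boolean precedesOld(OffsetMetadataSketch that) {
        return this.messageOffset < that.messageOffset;
    }

    // Fixed behaviour: compare the message offset first, then fall back to
    // the segment base offset to break ties.
    boolean precedesFixed(OffsetMetadataSketch that) {
        return this.messageOffset < that.messageOffset
                || (this.messageOffset == that.messageOffset
                    && this.segmentBaseOffset < that.segmentBaseOffset);
    }

    public static void main(String[] args) {
        OffsetMetadataSketch highWatermark = new OffsetMetadataSketch(100, 0);
        OffsetMetadataSketch logEnd = new OffsetMetadataSketch(100, 100);
        System.out.println(highWatermark.precedesOld(logEnd));   // false: watermark stuck
        System.out.println(highWatermark.precedesFixed(logEnd)); // true: watermark advances
    }
}
{code}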



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3409.
--
Resolution: Fixed

Issue resolved by pull request 1115
[https://github.com/apache/kafka/pull/1115]

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, so 
> mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
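
A rough Java sketch of the handling suggested in the description -- catch the failed commit 
instead of letting it kill the mirror maker thread (illustrative only, not the actual Scala 
patch merged in the pull request):

{code}
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SafeCommit {

    // Commit the consumer's current offsets, treating a failed commit caused
    // by a group rebalance as recoverable instead of fatal.
    public static <K, V> void commitOrSkip(KafkaConsumer<K, V> consumer) {
        try {
            consumer.commitSync();
        } catch (CommitFailedException e) {
            // The group rebalanced and this member's generation is stale.
            // Skip this commit; the next poll() rejoins the group, and a
            // later commit will succeed under the new generation.
        }
    }
}
{code}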



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208751#comment-15208751
 ] 

Gwen Shapira commented on KAFKA-3442:
-

Perfect! Thank you [~dana.powers] for testing our release candidate and 
catching this issue.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208748#comment-15208748
 ] 

ASF GitHub Bot commented on KAFKA-3409:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1115


> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, so 
> mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3409: handle CommitFailedException in Mi...

2016-03-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1115


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3135) Unexpected delay before fetch response transmission

2016-03-23 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208740#comment-15208740
 ] 

Jason Gustafson commented on KAFKA-3135:


[~rangadi] [~kciesielski] Thanks for letting me know. I tried to reproduce 
previously on Linux, but wasn't able to. Going to bump this to critical and see 
if I can isolate the problem.

> Unexpected delay before fetch response transmission
> ---
>
> Key: KAFKA-3135
> URL: https://issues.apache.org/jira/browse/KAFKA-3135
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> From the user list, Krzysztof Ciesielski reports the following:
> {quote}
> Scenario description:
> First, a producer writes 50 elements into a topic
> Then, a consumer starts to read, polling in a loop.
> When "max.partition.fetch.bytes" is set to a relatively small value, each
> "consumer.poll()" returns a batch of messages.
> If this value is left as default, the output tends to look like this:
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> As we can see, there are weird "gaps" when poll returns 0 elements for some
> time. What is the reason for that? Maybe there are some good practices
> about setting "max.partition.fetch.bytes" which I don't follow?
> {quote}
> The gist to reproduce this problem is here: 
> https://gist.github.com/kciesielski/054bb4359a318aa17561.
> After some initial investigation, the delay appears to be in the server's 
> networking layer. Basically I see a delay of 5 seconds from the time that 
> Selector.send() is invoked in SocketServer.Processor with the fetch response 
> to the time that the send is completed. Using netstat in the middle of the 
> delay shows the following output:
> {code}
> tcp4   0  0  10.191.0.30.55455  10.191.0.30.9092   ESTABLISHED
> tcp4   0 102400  10.191.0.30.9092   10.191.0.30.55454  ESTABLISHED
> {code}
> From this, it looks like the data reaches the send buffer, but needs to be 
> flushed.
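
For reference, a minimal Java version of the polling loop in the report (the linked gist is 
Scala); the broker address, topic, and group id below are placeholders:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollGapRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "poll-gap-test");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Leaving max.partition.fetch.bytes at its default reproduces the
        // reported gaps; uncomment to get smaller, steadier batches instead.
        // props.put("max.partition.fetch.bytes", "8192");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                System.out.println("Poll returned " + records.count() + " elements");
            }
        }
    }
}
{code}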



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3135) Unexpected delay before fetch response transmission

2016-03-23 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3135:
---
Priority: Critical  (was: Major)

> Unexpected delay before fetch response transmission
> ---
>
> Key: KAFKA-3135
> URL: https://issues.apache.org/jira/browse/KAFKA-3135
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> From the user list, Krzysztof Ciesielski reports the following:
> {quote}
> Scenario description:
> First, a producer writes 50 elements into a topic
> Then, a consumer starts to read, polling in a loop.
> When "max.partition.fetch.bytes" is set to a relatively small value, each
> "consumer.poll()" returns a batch of messages.
> If this value is left as default, the output tends to look like this:
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> As we can see, there are weird "gaps" when poll returns 0 elements for some
> time. What is the reason for that? Maybe there are some good practices
> about setting "max.partition.fetch.bytes" which I don't follow?
> {quote}
> The gist to reproduce this problem is here: 
> https://gist.github.com/kciesielski/054bb4359a318aa17561.
> After some initial investigation, the delay appears to be in the server's 
> networking layer. Basically I see a delay of 5 seconds from the time that 
> Selector.send() is invoked in SocketServer.Processor with the fetch response 
> to the time that the send is completed. Using netstat in the middle of the 
> delay shows the following output:
> {code}
> tcp4   0  0  10.191.0.30.55455  10.191.0.30.9092   ESTABLISHED
> tcp4   0 102400  10.191.0.30.9092   10.191.0.30.55454  ESTABLISHED
> {code}
> From this, it looks like the data reaches the send buffer, but needs to be 
> flushed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #8

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3442; Fix FileMessageSet iterator.

--
[...truncated 3133 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 

[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-23 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208721#comment-15208721
 ] 

Dana Powers commented on KAFKA-3442:


+1. Verified the test passes on trunk commit `7af67ce`.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2
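For illustration, a rough consumer-side sketch of the fetch configuration the report describes (broker address is a placeholder; the original test issued raw fetch requests from the Python client). With max.partition.fetch.bytes=1024, the broker is expected to cap the data returned for each partition in a single fetch at roughly that size:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // placeholder address
props.put("group.id", "fetch-size-check");
props.put("auto.offset.reset", "earliest");
props.put("max.partition.fetch.bytes", "1024");     // 1 KB per-partition fetch limit
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
consumer.assign(Collections.singletonList(new TopicPartition("foobar", 0)));
ConsumerRecords<byte[], byte[]> records = consumer.poll(5000);
{code}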



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-960) Upgrade Metrics to 3.x

2016-03-23 Thread Xavier Stevens (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208671#comment-15208671
 ] 

Xavier Stevens commented on KAFKA-960:
--

We would still be interested in having this. The latest Dropwizard Metrics 
library is currently 3.1.2. We might also be interested in contributing the 
patch for it, but we haven't gone through the contributor process yet.
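For illustration, a minimal sketch of the Metrics 3.x API such an upgrade would target (the metric names here are placeholders): 3.x replaces the static registry of 2.x with an explicit MetricRegistry that reporters are built against.

{code}
import com.codahale.metrics.JmxReporter;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

public class MetricsThreeSketch {
    public static void main(String[] args) {
        // 3.x: metrics hang off an explicit registry rather than a static singleton.
        MetricRegistry registry = new MetricRegistry();
        Meter messagesIn = registry.meter(MetricRegistry.name("BrokerTopicMetrics", "MessagesInPerSec"));
        messagesIn.mark();

        // Expose the registry over JMX, roughly analogous to the 2.x JMX reporter.
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
    }
}
{code}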

> Upgrade Metrics to 3.x
> --
>
> Key: KAFKA-960
> URL: https://issues.apache.org/jira/browse/KAFKA-960
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Cosmin Lehene
>
> Now that metrics 3.0 has been released 
> (http://metrics.codahale.com/about/release-notes/) we can upgrade back



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3400) Topic stop working / can't describe topic

2016-03-23 Thread Tobias (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208642#comment-15208642
 ] 

Tobias commented on KAFKA-3400:
---

I tried building from trunk (0.10.1.0) and recreated the cluster but still have 
the same issue where a number of topics don't get assigned partitions.

> Topic stop working / can't describe topic
> -
>
> Key: KAFKA-3400
> URL: https://issues.apache.org/jira/browse/KAFKA-3400
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Tobias
>Assignee: Ashish K Singh
> Fix For: 0.10.1.0
>
>
> we are seeing an issue where we intermittently (every couple of hours) get an 
> error with certain topics. They stop working and producers give a 
> LeaderNotFoundException.
> When we then try to use kafka-topics.sh to describe the topic we get the 
> error below.
> Error while executing topic command : next on empty iterator
> {{
> [2016-03-15 17:30:26,231] ERROR java.util.NoSuchElementException: next on 
> empty iterator
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
>   at scala.collection.IterableLike$class.head(IterableLike.scala:91)
>   at scala.collection.AbstractIterable.head(Iterable.scala:54)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:198)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:188)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:188)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:66)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> }}
> if we delete the topic, then it will start to work again for a while
> We can't see anything obvious in the logs but are happy to provide if needed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1142

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3442; Fix FileMessageSet iterator.

--
[...truncated 1591 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #473

2016-03-23 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3442; Fix FileMessageSet iterator.

--
[...truncated 1605 lines...]

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 

[jira] [Comment Edited] (KAFKA-3400) Topic stop working / can't describe topic

2016-03-23 Thread Tobias (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208542#comment-15208542
 ] 

Tobias edited comment on KAFKA-3400 at 3/23/16 3:11 PM:


I also checked the data directories and the only topic to have any files 
created is the EXCHANGE_RATE_EXPORT one.
No files exist for any of the other topics (5 in all).


But if I run
bin/kafka-topics.sh --zookeeper  --create --topic TEST_TOPIC 
--partitions 8 --replication-factor 3

The topic gets created correctly and its files exist under the data directories.


was (Author: tobad357):
I also checked the data directories and the only topic to have any files 
created is the EXCHANGE_RATE_EXPORT one.
No files for any other topics (5 all in all)

> Topic stop working / can't describe topic
> -
>
> Key: KAFKA-3400
> URL: https://issues.apache.org/jira/browse/KAFKA-3400
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Tobias
>Assignee: Ashish K Singh
> Fix For: 0.10.1.0
>
>
> we are seeing an issue where we intermittently (every couple of hours) get an 
> error with certain topics. They stop working and producers give a 
> LeaderNotFoundException.
> When we then try to use kafka-topics.sh to describe the topic we get the 
> error below.
> Error while executing topic command : next on empty iterator
> {{
> [2016-03-15 17:30:26,231] ERROR java.util.NoSuchElementException: next on 
> empty iterator
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
>   at scala.collection.IterableLike$class.head(IterableLike.scala:91)
>   at scala.collection.AbstractIterable.head(Iterable.scala:54)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:198)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:188)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:188)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:66)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> }}
> if we delete the topic, then it will start to work again for a while
> We can't see anything obvious in the logs but are happy to provide if needed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3400) Topic stop working / can't describe topic

2016-03-23 Thread Tobias (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208542#comment-15208542
 ] 

Tobias commented on KAFKA-3400:
---

I also checked the data directories and the only topic to have any files 
created is the EXCHANGE_RATE_EXPORT one.
No files exist for any of the other topics (5 in all).

> Topic stop working / can't describe topic
> -
>
> Key: KAFKA-3400
> URL: https://issues.apache.org/jira/browse/KAFKA-3400
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Tobias
>Assignee: Ashish K Singh
> Fix For: 0.10.1.0
>
>
> we are seeing an issue where we intermittently (every couple of hours) get an 
> error with certain topics. They stop working and producers give a 
> LeaderNotFoundException.
> When we then try to use kafka-topics.sh to describe the topic we get the 
> error below.
> Error while executing topic command : next on empty iterator
> {{
> [2016-03-15 17:30:26,231] ERROR java.util.NoSuchElementException: next on 
> empty iterator
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
>   at scala.collection.IterableLike$class.head(IterableLike.scala:91)
>   at scala.collection.AbstractIterable.head(Iterable.scala:54)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:198)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:188)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:188)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:66)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> }}
> if we delete the topic, then it will start to work again for a while
> We can't see anything obvious in the logs but are happy to provide if needed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3400) Topic stop working / can't describe topic

2016-03-23 Thread Tobias (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208531#comment-15208531
 ] 

Tobias commented on KAFKA-3400:
---

More research.
I've now set up a completely new cluster using automatic topic creation 
(auto.create.topics.enable).
The producers and consumers are version 0.8.2.1 and the brokers are 0.9.0.1.
When the topics get auto-created, most of them don't get assigned partitions, 
and after the initial message I see nothing else regarding these topics in 
the logs.
Example (notice the empty Map for the replica assignment):
{noformat}
INFO [TopicChangeListener on Controller 3]: New topics: [Set(AUDIT_EXPORT)], 
deleted topics: [Set()], new partition replica assignment [Map()] 
(kafka.controller.PartitionStateMachine$TopicChangeListener)
{noformat}

While the one topic where it worked looks like this:
{noformat}
[2016-03-23 14:43:39,740] INFO [TopicChangeListener on Controller 3]: New 
topics: [Set(EXCHANGE_RATE_EXPORT)], deleted topics: [Set()], new partition 
replica assignment [Map([EXCHANGE_RATE_EXPORT,1] -> List(3, 1, 2), 
[EXCHANGE_RATE_EXPORT,6] -> List(2, 3, 1), [EXCHANGE_RATE_EXPORT,0] -> List(2, 
3, 1), [EXCHANGE_RATE_EXPORT,5] -> List(1, 3, 2), [EXCHANGE_RATE_EXPORT,4] -> 
List(3, 2, 1), [EXCHANGE_RATE_EXPORT,7] -> List(3, 1, 2), 
[EXCHANGE_RATE_EXPORT,2] -> List(1, 2, 3), [EXCHANGE_RATE_EXPORT,3] -> List(2, 
1, 3))] (kafka.controller.PartitionStateMachine$TopicChangeListener)
[2016-03-23 14:43:39,741] INFO [Controller 3]: New topic creation callback for 
[EXCHANGE_RATE_EXPORT,1],[EXCHANGE_RATE_EXPORT,2],[EXCHANGE_RATE_EXPORT,7],[EXCHANGE_RATE_EXPORT,0],[EXCHANGE_RATE_EXPORT,5],[EXCHANGE_RATE_EXPORT,6],[EXCHANGE_RATE_EXPORT,4],[EXCHANGE_RATE_EXPORT,3]
 (kafka.controller.KafkaController)
{noformat}
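As a point of comparison, a hedged sketch of creating one of these topics explicitly (assuming the 0.9.x AdminUtils/ZkUtils API; the ZooKeeper address, timeouts, and counts below are placeholders). This mirrors what kafka-topics.sh --create does internally and sidesteps auto-creation entirely:

{code}
import java.util.Properties;

import kafka.admin.AdminUtils;
import kafka.utils.ZkUtils;

public class CreateTopicExplicitly {
    public static void main(String[] args) {
        // Placeholder ZooKeeper connection string and timeouts.
        ZkUtils zkUtils = ZkUtils.apply("zookeeper:2181", 30000, 30000, false);
        try {
            // Equivalent in effect to:
            //   kafka-topics.sh --create --topic AUDIT_EXPORT --partitions 8 --replication-factor 3
            AdminUtils.createTopic(zkUtils, "AUDIT_EXPORT", 8, 3, new Properties());
        } finally {
            zkUtils.close();
        }
    }
}
{code}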

> Topic stop working / can't describe topic
> -
>
> Key: KAFKA-3400
> URL: https://issues.apache.org/jira/browse/KAFKA-3400
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Tobias
>Assignee: Ashish K Singh
> Fix For: 0.10.1.0
>
>
> we are seeing an issue where we intermittently (every couple of hours) get an 
> error with certain topics. They stop working and producers give a 
> LeaderNotFoundException.
> When we then try to use kafka-topics.sh to describe the topic we get the 
> error below.
> Error while executing topic command : next on empty iterator
> {{
> [2016-03-15 17:30:26,231] ERROR java.util.NoSuchElementException: next on 
> empty iterator
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
>   at scala.collection.IterableLike$class.head(IterableLike.scala:91)
>   at scala.collection.AbstractIterable.head(Iterable.scala:54)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:198)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:188)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:188)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:66)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> }}
> if we delete the topic, then it will start to work again for a while
> We can't see anything obvious in the logs but are happy to provide if needed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3135) Unexpected delay before fetch response transmission

2016-03-23 Thread Raghu Angadi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208512#comment-15208512
 ] 

Raghu Angadi commented on KAFKA-3135:
-

[~hachikuji], mine is also on Linux with Java 8 (Docker containers on GCE). In 
one of the tests with default receive buffers, select() alternated between 
waiting 0 ms and 20-30 ms, reading a few KB each time.
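If the stalls really do come down to socket buffer sizing, one knob to experiment with is the client-side receive buffer; a minimal sketch (the value and address are illustrative, not a recommendation):

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // placeholder address
props.put("group.id", "buffer-experiment");
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
// Size the consumer's TCP receive buffer explicitly instead of relying on the default.
props.put("receive.buffer.bytes", "1048576");
KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
{code}

The broker-side counterparts are socket.send.buffer.bytes and socket.receive.buffer.bytes in server.properties.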

> Unexpected delay before fetch response transmission
> ---
>
> Key: KAFKA-3135
> URL: https://issues.apache.org/jira/browse/KAFKA-3135
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.1
>
>
> From the user list, Krzysztof Ciesielski reports the following:
> {quote}
> Scenario description:
> First, a producer writes 50 elements into a topic
> Then, a consumer starts to read, polling in a loop.
> When "max.partition.fetch.bytes" is set to a relatively small value, each
> "consumer.poll()" returns a batch of messages.
> If this value is left as default, the output tends to look like this:
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> As we can see, there are weird "gaps" when poll returns 0 elements for some
> time. What is the reason for that? Maybe there are some good practices
> about setting "max.partition.fetch.bytes" which I don't follow?
> {quote}
> The gist to reproduce this problem is here: 
> https://gist.github.com/kciesielski/054bb4359a318aa17561.
> After some initial investigation, the delay appears to be in the server's 
> networking layer. Basically I see a delay of 5 seconds from the time that 
> Selector.send() is invoked in SocketServer.Processor with the fetch response 
> to the time that the send is completed. Using netstat in the middle of the 
> delay shows the following output:
> {code}
> tcp4   0  0  10.191.0.30.55455  10.191.0.30.9092   ESTABLISHED
> tcp4   0 102400  10.191.0.30.9092   10.191.0.30.55454  ESTABLISHED
> {code}
> From this, it looks like the data reaches the send buffer, but needs to be 
> flushed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3135) Unexpected delay before fetch response transmission

2016-03-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3135:
---
Fix Version/s: 0.10.0.1

> Unexpected delay before fetch response transmission
> ---
>
> Key: KAFKA-3135
> URL: https://issues.apache.org/jira/browse/KAFKA-3135
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.1
>
>
> From the user list, Krzysztof Ciesielski reports the following:
> {quote}
> Scenario description:
> First, a producer writes 50 elements into a topic
> Then, a consumer starts to read, polling in a loop.
> When "max.partition.fetch.bytes" is set to a relatively small value, each
> "consumer.poll()" returns a batch of messages.
> If this value is left as default, the output tends to look like this:
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> As we can see, there are weird "gaps" when poll returns 0 elements for some
> time. What is the reason for that? Maybe there are some good practices
> about setting "max.partition.fetch.bytes" which I don't follow?
> {quote}
> The gist to reproduce this problem is here: 
> https://gist.github.com/kciesielski/054bb4359a318aa17561.
> After some initial investigation, the delay appears to be in the server's 
> networking layer. Basically I see a delay of 5 seconds from the time that 
> Selector.send() is invoked in SocketServer.Processor with the fetch response 
> to the time that the send is completed. Using netstat in the middle of the 
> delay shows the following output:
> {code}
> tcp4   0  0  10.191.0.30.55455  10.191.0.30.9092   ESTABLISHED
> tcp4   0 102400  10.191.0.30.9092   10.191.0.30.55454  ESTABLISHED
> {code}
> From this, it looks like the data reaches the send buffer, but needs to be 
> flushed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinately

2016-03-23 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208491#comment-15208491
 ] 

Jun Rao commented on KAFKA-3296:


[~thecoop1984], thanks for the results. We will need to figure out why there is 
no leader for the __consumer_offsets topic. Do you see any ZK session expirations 
in the broker log (search for "Expired")?

> All consumer reads hang indefinately
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
> Attachments: controller.zip, kafkalogs.zip
>
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an issue where 
> every consumer that tries to read from the broker hangs, spamming the 
> following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@439e25fb,
>  
> 

[GitHub] kafka pull request: KAFKA-3442: Fix FileMessageSet iterator.

2016-03-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1112


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-23 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208459#comment-15208459
 ] 

Jun Rao commented on KAFKA-3442:


[~dana.powers], this issue is fixed now. Could you verify that with your Python 
client? Thanks,

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-23 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3442:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1112
[https://github.com/apache/kafka/pull/1112]

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208454#comment-15208454
 ] 

ASF GitHub Bot commented on KAFKA-3442:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1112


> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

