Re: group protocol/metadata documentation

2015-11-25 Thread Dana Powers
Thanks, Jason. I see the range and roundrobin assignment strategies
documented in the source. I don't see userdata used by either -- is that
correct (I may be misreading)? The notes suggest userdata for something
more detailed in the future, like rack-aware placements?
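For anyone else implementing this: my reading of the wiki Jason links below is
that the consumer protocol's subscription metadata is version (int16), then the
topic list, then an opaque userdata bytes field. A rough Java sketch of the
encoding under that assumption (the field layout is my interpretation of the
wiki, not verified against the broker source):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public final class SubscriptionMetadata {
    // Version(int16), Topics(array of string), UserData(bytes), using the
    // usual Kafka wire conventions: int16 string length, int32 array count,
    // int32 bytes length (-1 for a null bytes field).
    public static ByteBuffer encode(short version, List<String> topics, byte[] userData) {
        int size = 2 + 4 + 4 + (userData == null ? 0 : userData.length);
        for (String topic : topics)
            size += 2 + topic.getBytes(StandardCharsets.UTF_8).length;
        ByteBuffer buf = ByteBuffer.allocate(size);
        buf.putShort(version);
        buf.putInt(topics.size());
        for (String topic : topics) {
            byte[] bytes = topic.getBytes(StandardCharsets.UTF_8);
            buf.putShort((short) bytes.length);
            buf.put(bytes);
        }
        if (userData == null) {
            buf.putInt(-1);
        } else {
            buf.putInt(userData.length);
            buf.put(userData);
        }
        buf.flip();
        return buf;
    }
}

(If range and roundrobin really don't use userdata, I'd expect to send a null
or empty bytes field there.)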

One other question: in what circumstances would consumer processes in a
single group want to use different topic subscriptions rather than
configure a new group?

Thanks again,

-Dana
On Nov 25, 2015 8:59 AM, "Jason Gustafson"  wrote:

> Hey Dana,
>
> Have a look at this wiki, which has more detail on the consumer's embedded
> protocol:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
> .
>
> At the moment, the group protocol supports consumer groups and Kafka
> Connect groups. Kafka tooling currently depends on the structure for these
> protocol types, so reuse of the same names might cause problems. I will
> look into updating the protocol documentation to standardize the protocol
> formats that are in use and to provide guidance for client implementations.
> My own view is that unless there's a good reason not to, all consumer
> implementations should use the same consumer protocol format so that tooling
> will work correctly.
>
> Thanks,
> Jason
>
>
>
> On Tue, Nov 24, 2015 at 4:16 PM, Dana Powers 
> wrote:
>
> > Hi all - I've been reading through the wiki docs and mailing list threads
> > for the new JoinGroup/SyncGroup/Heartbeat APIs, hoping to add
> functionality
> > to the python driver. It appears that there is a shared notion of group
> > "protocols" (client publishes supported protocols, coordinator picks
> > protocol for group to use), and their associated metadata. Is there any
> > documentation available for existing protocols? Will there be an official
> > place to document that supporting protocol X means foo? I think I can
> > probably construct a simple working protocol, but if I pick a protocol
> name
> > that already exists, will things break?
> >
> > -Dana
> >
>


[jira] [Updated] (KAFKA-2771) Add Rolling Upgrade to Secured Cluster to System Tests

2015-11-25 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-2771:

Summary: Add Rolling Upgrade to Secured Cluster to System Tests  (was: Add 
SSL Rolling Upgrade Test to System Tests)

> Add Rolling Upgrade to Secured Cluster to System Tests
> --
>
> Key: KAFKA-2771
> URL: https://issues.apache.org/jira/browse/KAFKA-2771
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Ensure we can perform a rolling upgrade to enable SSL on a running cluster
> *Method*
> - Start with a 0.9.0 cluster with SSL disabled
> - Upgrade the Client and Inter-Broker ports to SSL (this will take two rounds 
> of bounces: one to open the SSL port and one to close the PLAINTEXT port)
> - Ensure you can produce (acks = -1) and consume during the process.
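For illustration, the two rounds of bounces correspond to a listener change 
along the following lines in server.properties (hostnames and ports are 
hypothetical; this sketch is illustrative, not part of the test spec):
{code}
# Round 1 (first rolling bounce): open the SSL port alongside PLAINTEXT
listeners=PLAINTEXT://broker1:9092,SSL://broker1:9093
security.inter.broker.protocol=SSL

# Round 2 (second rolling bounce): close the PLAINTEXT port
listeners=SSL://broker1:9093
{code}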



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2771) Add Rolling Upgrade to Secured Cluster to System Tests

2015-11-25 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-2771:

Description: 
Ensure we can perform a rolling upgrade to enable SSL & SASL_PLAINTEXT on a 
running cluster
*Method*

- Start with a 0.9.0 cluster with security disabled
- Upgrade the Client and Inter-Broker ports to SSL (this will take two rounds of 
bounces: one to open the SSL port and one to close the PLAINTEXT port)
- Ensure you can produce (acks = -1) and consume during the process.


  was:
Ensure we can perform a rolling upgrade to enable SSL on a running cluster
*Method*

- Start with a 0.9.0 cluster with SSL disabled
- Upgrade the Client and Inter-Broker ports to SSL (this will take two rounds of 
bounces: one to open the SSL port and one to close the PLAINTEXT port)
- Ensure you can produce (acks = -1) and consume during the process.



> Add Rolling Upgrade to Secured Cluster to System Tests
> --
>
> Key: KAFKA-2771
> URL: https://issues.apache.org/jira/browse/KAFKA-2771
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Ensure we can perform a rolling upgrade to enable SSL & SASL_PLAINTEXT on a 
> running cluster
> *Method*
> - Start with a 0.9.0 cluster with security disabled
> - Upgrade the Client and Inter-Broker ports to SSL (this will take two rounds 
> of bounces: one to open the SSL port and one to close the PLAINTEXT port)
> - Ensure you can produce (acks = -1) and consume during the process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2890) Strange behaviour during partitions reassignment.

2015-11-25 Thread Alexander Kukushkin (JIRA)
Alexander Kukushkin created KAFKA-2890:
--

 Summary: Strange behaviour during partitions reassignment.
 Key: KAFKA-2890
 URL: https://issues.apache.org/jira/browse/KAFKA-2890
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Alexander Kukushkin


Hi.

I am playing with the new version of kafka (0.9.0.0).
Initially I created a cluster of 3 nodes and created some topics there. Later 
I added one more node and triggered partition reassignment. It's kind of 
working, but on the new node there are strange warnings in the log file:

[2015-11-25 14:06:52,998] WARN [ReplicaFetcherThread-1-152], Error in fetch 
kafka.server.ReplicaFetcherThread$FetchRequest@3f442c7b. Possible cause: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'responses': Error reading field 'topic': java.nio.BufferUnderflowException 
(kafka.server.ReplicaFetcherThread)

I've found similar log messages in the following ticket: 
https://issues.apache.org/jira/browse/KAFKA-2756
But there, such messages were related to replication between different 
versions (0.8 and 0.9).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Improve broker id documentation

2015-11-25 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/585

MINOR: Improve broker id documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka brokerid

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/585.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #585


commit 5b4b56bbb090f9fdce50950db5d207b7c35a5a52
Author: Grant Henke 
Date:   2015-11-25T14:52:02Z

MINOR: Improve broker id documentation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-25 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026907#comment-15026907
 ] 

Ben Stopford commented on KAFKA-2891:
-

Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).


> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing is quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,342] INFO Marking the coordinator 2147483644 dead. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,958] INFO Attempt to join group test-consumer-group 
> failed due to unknown member id, resetting and retrying. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:42,437] INFO Fetch offset null is out of range, resetting 
> offset (org.apache.kafka.clients.consumer.internals.Fetcher)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-25 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-2891:
-

 Summary: Gaps in messages delivered by new consumer after Kafka 
restart
 Key: KAFKA-2891
 URL: https://issues.apache.org/jira/browse/KAFKA-2891
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.9.0.0
Reporter: Rajini Sivaram
Priority: Critical


Replication tests when run with the new consumer with SSL/SASL were failing 
very often because messages were not being consumed from some topics after a 
Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am still 
seeing some failures (less often now) because a small set of messages are not 
received after Kafka restart. This failure looks slightly different from the 
one before the fix for KAFKA-2877 was applied, hence the new defect. The test 
fails because not all acked messages are received by the consumer, and the 
number of messages missing is quite small.

[~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?

Not sure if any of these log entries are important:
{quote}
[2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 14:41:12,342] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 14:41:12,958] INFO Attempt to join group test-consumer-group failed 
due to unknown member id, resetting and retrying. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 14:41:42,437] INFO Fetch offset null is out of range, resetting 
offset (org.apache.kafka.clients.consumer.internals.Fetcher)
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2890) Strange behaviour during partitions reassignment.

2015-11-25 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026910#comment-15026910
 ] 

Ismael Juma commented on KAFKA-2890:


Thanks for the report. Would you be able to list the commands you executed in 
order for us to reproduce it?
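Presumably something along these lines, but the exact invocations would help 
(ZooKeeper address and JSON file names below are hypothetical):
{code}
# generate a candidate assignment that includes the new broker (id 4 here)
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --topics-to-move-json-file topics.json --broker-list "1,2,3,4" --generate

# execute, then verify
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file reassignment.json --execute
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file reassignment.json --verify
{code}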

> Strange behaviour during partitions reassignment.
> -
>
> Key: KAFKA-2890
> URL: https://issues.apache.org/jira/browse/KAFKA-2890
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Alexander Kukushkin
>
> Hi.
> I am playing with the new version of kafka (0.9.0.0).
> Initially I created a cluster of 3 nodes and created some topics there. 
> Later I added one more node and triggered partition reassignment. It's 
> kind of working, but on the new node there are strange 
> warnings in the log file:
> [2015-11-25 14:06:52,998] WARN [ReplicaFetcherThread-1-152], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@3f442c7b. Possible cause: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'responses': Error reading field 'topic': java.nio.BufferUnderflowException 
> (kafka.server.ReplicaFetcherThread)
> I've found similar log messages in the following ticket: 
> https://issues.apache.org/jira/browse/KAFKA-2756
> But there, such messages were related to replication between different 
> versions (0.8 and 0.9).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-25 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026907#comment-15026907
 ] 

Ben Stopford edited comment on KAFKA-2891 at 11/25/15 3:38 PM:
---

Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).

The five messages were produced at 11:29:19. Interestingly, this time 
corresponds to the consumer's first error messages (below). These come a few 
seconds after the second node (of 3) is shut down.

[2015-11-25 11:28:55,958] INFO Kafka version : 0.9.1.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:28:55,958] INFO Kafka commitId : 6f3c8e2c5079f00e 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:29:02,628] INFO Attempt to heart beat failed since member id is 
not valid, reset it and try to re-join group. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:02,649] ERROR Error ILLEGAL_GENERATION occurred while 
committing offsets for group unique-test-group-0.206159604113 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:02,649] WARN Auto offset commit failed:  
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:19,376] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,383] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,386] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)


was (Author: benstopford):
Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).


> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing is quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,342] INFO Marking the coordinator 2147483644 dead. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,958] INFO Attempt to join group test-consumer-group 
> failed due to unknown member id, resetting and retrying. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:42,437] INFO Fetch offset null is out of range, resetting 
> offset (org.apache.kafka.clients.consumer.internals.Fetcher)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-25 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026907#comment-15026907
 ] 

Ben Stopford edited comment on KAFKA-2891 at 11/25/15 3:39 PM:
---

Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).

The five messages were produced at 11:29:19. Interestingly, this time 
corresponds to the consumer's first error messages (below). These come a few 
seconds after the second node (of 3) is shut down.

{quote}
[2015-11-25 11:28:55,958] INFO Kafka version : 0.9.1.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:28:55,958] INFO Kafka commitId : 6f3c8e2c5079f00e 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:29:02,628] INFO Attempt to heart beat failed since member id is 
not valid, reset it and try to re-join group. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:02,649] ERROR Error ILLEGAL_GENERATION occurred while 
committing offsets for group unique-test-group-0.206159604113 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:02,649] WARN Auto offset commit failed:  
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:19,376] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,383] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,386] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
{quote}


was (Author: benstopford):
Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).

The five messages were produced at 11:29:19. Interestingly, this time 
corresponds to the consumer's first error messages (below). These come a few 
seconds after the second node (of 3) is shut down.

[2015-11-25 11:28:55,958] INFO Kafka version : 0.9.1.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:28:55,958] INFO Kafka commitId : 6f3c8e2c5079f00e 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:29:02,628] INFO Attempt to heart beat failed since member id is 
not valid, reset it and try to re-join group. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:02,649] ERROR Error ILLEGAL_GENERATION occurred while 
committing offsets for group unique-test-group-0.206159604113 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:02,649] WARN Auto offset commit failed:  
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:19,376] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,383] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,386] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)

> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing is quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
> (org.apache.kafka.clients.consumer.in

[jira] [Comment Edited] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-25 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026907#comment-15026907
 ] 

Ben Stopford edited comment on KAFKA-2891 at 11/25/15 3:40 PM:
---

Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).

The five messages were produced at 11:29:19. Interestingly, this time 
corresponds to the consumer's first notifications that the coordinator is dead 
(below). These come a few seconds after the second node (of 3) is shut down.

{quote}
[2015-11-25 11:28:55,958] INFO Kafka version : 0.9.1.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:28:55,958] INFO Kafka commitId : 6f3c8e2c5079f00e 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:29:02,628] INFO Attempt to heart beat failed since member id is 
not valid, reset it and try to re-join group. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:02,649] ERROR Error ILLEGAL_GENERATION occurred while 
committing offsets for group unique-test-group-0.206159604113 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:02,649] WARN Auto offset commit failed:  
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:19,376] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,383] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,386] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
{quote}


was (Author: benstopford):
Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).

The five messages were produced at 11:29:19. Interestingly, this time 
corresponds to the consumer's first error messages (below). These come a few 
seconds after the second node (of 3) is shut down.

{quote}
[2015-11-25 11:28:55,958] INFO Kafka version : 0.9.1.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:28:55,958] INFO Kafka commitId : 6f3c8e2c5079f00e 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:29:02,628] INFO Attempt to heart beat failed since member id is 
not valid, reset it and try to re-join group. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:02,649] ERROR Error ILLEGAL_GENERATION occurred while 
committing offsets for group unique-test-group-0.206159604113 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:02,649] WARN Auto offset commit failed:  
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:19,376] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,383] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,386] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
{quote}

> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing is quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rej

kafka 0.8 producer issue

2015-11-25 Thread Kudumula, Surender
Hi all,
I am trying to get the producer working. It was working before, but now I'm 
getting the following issue. I have created a new topic as well, just in case 
it was an issue with the topic, but still no luck. I have increased the message 
size in the broker, as I am trying to send at least a 3 MB message here in byte 
array format. Any suggestions, please?

2015-11-25 15:46:11 INFO  Login:185 - TGT refresh sleeping until: Tue Dec 01 
07:03:07 GMT 2015
2015-11-25 15:46:11 INFO  KafkaProducer:558 - Closing the Kafka producer with 
timeoutMillis = 9223372036854775807 ms.
2015-11-25 15:46:12 ERROR RecordBatch:96 - Error executing user-provided 
callback on message for topic-partition ResponseTopic-0:
java.lang.NullPointerException
at 
com.hpe.ssmamp.kafka.KafkaPDFAProducer$1.onCompletion(KafkaPDFAProducer.java:62)
at 
org.apache.kafka.clients.producer.internals.RecordBatch.done(RecordBatch.java:93)
at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:285)
at 
org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:253)
at 
org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:55)
at 
org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:328)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:237)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:140)
at java.lang.Thread.run(Thread.java:745)





[jira] [Comment Edited] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-25 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026907#comment-15026907
 ] 

Ben Stopford edited comment on KAFKA-2891 at 11/25/15 4:01 PM:
---

Yes. I get something similar. Worked fine for about six runs then got a run 
with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).

The five messages were produced at 11:29:19. Interestingly, this time 
corresponds to the consumer's first notifications that the coordinator is dead 
(below). These come a few seconds after the second node (of 3) is shut down.

{quote}
[2015-11-25 11:28:55,958] INFO Kafka version : 0.9.1.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:28:55,958] INFO Kafka commitId : 6f3c8e2c5079f00e 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:29:02,628] INFO Attempt to heart beat failed since member id is 
not valid, reset it and try to re-join group. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:02,649] ERROR Error ILLEGAL_GENERATION occurred while 
committing offsets for group unique-test-group-0.206159604113 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:02,649] WARN Auto offset commit failed:  
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:19,376] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,383] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,386] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
{quote}


was (Author: benstopford):
Yes. I get exactly the same. Worked fine for about six runs then got a run with:

At least one acked message did not appear in the consumed messages. 
acked_minus_consumed: set([29073, 29067, 29076, 29070, 29079])

Which I have not seen before (i.e. just a few messages missing).

The five messages were produced at 11:29:19. Interestingly, this time 
corresponds to the consumer's first notifications that the coordinator is dead 
(below). These come a few seconds after the second node (of 3) is shut down.

{quote}
[2015-11-25 11:28:55,958] INFO Kafka version : 0.9.1.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:28:55,958] INFO Kafka commitId : 6f3c8e2c5079f00e 
(org.apache.kafka.common.utils.AppInfoParser)
[2015-11-25 11:29:02,628] INFO Attempt to heart beat failed since member id is 
not valid, reset it and try to re-join group. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:02,649] ERROR Error ILLEGAL_GENERATION occurred while 
committing offsets for group unique-test-group-0.206159604113 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:02,649] WARN Auto offset commit failed:  
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2015-11-25 11:29:19,376] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,383] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2015-11-25 11:29:19,386] INFO Marking the coordinator 2147483644 dead. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
{quote}

> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing is quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, wil

[GitHub] kafka pull request: KAFKA-2878: Guard against OutOfMemory in Kafka...

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/577


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2878) Kafka broker throws OutOfMemory exception with invalid join group request

2015-11-25 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2878.

   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 577
[https://github.com/apache/kafka/pull/577]

> Kafka broker throws OutOfMemory exception with invalid join group request
> -
>
> Key: KAFKA-2878
> URL: https://issues.apache.org/jira/browse/KAFKA-2878
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> Array allocation for the join group request doesn't have any checks and hence 
> can result in an OutOfMemory exception in the broker. The array size from the 
> request should be validated to avoid DoS attacks on a secure installation of 
> Kafka.
> {quote}
> at org/apache/kafka/common/protocol/types/ArrayOf.read(ArrayOf.java:44)
> at org/apache/kafka/common/protocol/types/Schema.read(Schema.java:69)
> at 
> org/apache/kafka/common/protocol/ProtoUtils.parseRequest(ProtoUtils.java:60)
> at 
> org/apache/kafka/common/requests/JoinGroupRequest.parse(JoinGroupRequest.java:144)
> at 
> org/apache/kafka/common/requests/AbstractRequest.getRequest(AbstractRequest.java:55)
>  
> at kafka/network/RequestChannel$Request.(RequestChannel.scala:78)
> {quote}
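The fix amounts to bounds-checking the declared array size against the bytes 
actually remaining in the buffer before allocating, so a forged size field 
cannot trigger a huge allocation. A minimal sketch of the idea (illustrative 
only, not the actual patch from pull request 577):
{code}
import java.nio.ByteBuffer;

public final class SafeArrayRead {
    // Reject a declared element count that could not possibly fit in the
    // remaining bytes (each int32 element needs at least 4 bytes).
    public static int[] readIntArray(ByteBuffer buf) {
        int size = buf.getInt();
        if (size < 0 || size > buf.remaining() / 4)
            throw new IllegalArgumentException("Invalid array size " + size
                    + ", only " + buf.remaining() + " bytes remain");
        int[] elements = new int[size];
        for (int i = 0; i < size; i++)
            elements[i] = buf.getInt();
        return elements;
    }
}
{code}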



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2878) Kafka broker throws OutOfMemory exception with invalid join group request

2015-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027108#comment-15027108
 ] 

ASF GitHub Bot commented on KAFKA-2878:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/577


> Kafka broker throws OutOfMemory exception with invalid join group request
> -
>
> Key: KAFKA-2878
> URL: https://issues.apache.org/jira/browse/KAFKA-2878
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> Array allocation for the join group request doesn't have any checks and hence 
> can result in an OutOfMemory exception in the broker. The array size from the 
> request should be validated to avoid DoS attacks on a secure installation of 
> Kafka.
> {quote}
> at org/apache/kafka/common/protocol/types/ArrayOf.read(ArrayOf.java:44)
> at org/apache/kafka/common/protocol/types/Schema.read(Schema.java:69)
> at 
> org/apache/kafka/common/protocol/ProtoUtils.parseRequest(ProtoUtils.java:60)
> at 
> org/apache/kafka/common/requests/JoinGroupRequest.parse(JoinGroupRequest.java:144)
> at 
> org/apache/kafka/common/requests/AbstractRequest.getRequest(AbstractRequest.java:55)
>  
> at kafka/network/RequestChannel$Request.(RequestChannel.scala:78)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[ANNOUNCE] CFP open for ApacheCon North America 2016

2015-11-25 Thread Rich Bowen
Community growth starts by talking with those interested in your
project. ApacheCon North America is coming, are you?

We are delighted to announce that the Call For Presentations (CFP) is
now open for ApacheCon North America. You can submit your proposed
sessions at
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp
for big data talks and
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
for all other topics.

ApacheCon North America will be held in Vancouver, Canada, May 9-13th
2016. ApacheCon has been running every year since 2000, and is the place
to build your project communities.

While we will consider individual talks, we prefer to see related
sessions that are likely to draw users and community members. When
submitting your talk, work with your project community and with related
communities to come up with a full program that will walk attendees
through the basics and on into mastery of your project in example use
cases. Content that introduces what's new in your latest release is also
of particular interest, especially when it builds upon existing well-known
application models. The goal should be to showcase your project in
ways that will attract participants and encourage engagement in your
community. Please remember to involve your whole project community (user
and dev lists) when building content. This is your chance to create a
project-specific event within the broader ApacheCon conference.

Content at ApacheCon North America will be cross-promoted as
mini-conferences, such as ApacheCon Big Data, and ApacheCon Mobile, so
be sure to indicate which larger category your proposed sessions fit into.

Finally, please plan to attend ApacheCon, even if you're not proposing a
talk. The biggest value of the event is community building, and we count
on you to make it a place where your project community is likely to
congregate, not just for the technical content in sessions, but for
hackathons, project summits, and good old fashioned face-to-face networking.

-- 
rbo...@apache.org
http://apache.org/


RE: kafka 0.8 producer issue

2015-11-25 Thread Kudumula, Surender
Hi prabhjot
I just realized it's not producing to Kafka; that's why I am getting this null 
pointer. On my localhost I can see Kafka info/debug logs, but in the HDP cluster I 
cannot see any exceptions. How should I configure the logger to show exceptions 
in the Kafka log files? Thanks



-Original Message-
From: Prabhjot Bharaj [mailto:prabhbha...@gmail.com] 
Sent: 25 November 2015 17:35
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: Re: kafka 0.8 producer issue

Hi,

From the information that you've provided, I think your callback is the culprit 
here, as can be seen from the stack trace:

at com.hpe.ssmamp.kafka.KafkaPDFAProducer$1.onCompletion(
KafkaPDFAProducer.java:62)

Please provide more information, such as a code snippet, so that we can tell 
more.

Thanks,
Prabhjot

On Wed, Nov 25, 2015 at 9:24 PM, Kudumula, Surender < 
surender.kudum...@hpe.com> wrote:

> Hi all,
> I am trying to get the producer working. It was working before, but now I'm 
> getting the following issue. I have created a new topic as well, just 
> in case it was an issue with the topic, but still no luck. I have 
> increased the message size in the broker, as I am trying to send at least 
> a 3 MB message here in byte array format. Any suggestions, please?
>
> 2015-11-25 15:46:11 INFO  Login:185 - TGT refresh sleeping until: Tue 
> Dec
> 01 07:03:07 GMT 2015
> 2015-11-25 15:46:11 INFO  KafkaProducer:558 - Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> 2015-11-25 15:46:12 ERROR RecordBatch:96 - Error executing 
> user-provided callback on message for topic-partition ResponseTopic-0:
> java.lang.NullPointerException
> at
> com.hpe.ssmamp.kafka.KafkaPDFAProducer$1.onCompletion(KafkaPDFAProducer.java:62)
> at
> org.apache.kafka.clients.producer.internals.RecordBatch.done(RecordBatch.java:93)
> at
> org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:285)
> at
> org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:253)
> at
> org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:55)
> at
> org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:328)
> at
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:237)
> at
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)
> at
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:140)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>


--
-
"There are only 10 types of people in the world: Those who understand binary, 
and those who don't"


[jira] [Commented] (KAFKA-2799) WakupException thrown in the followup poll() could lead to data loss

2015-11-25 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027255#comment-15027255
 ] 

Grant Henke commented on KAFKA-2799:


Was this supposed to make it into 0.9.0.0? I don't see the commit in the 
branch. 

> WakupException thrown in the followup poll() could lead to data loss
> 
>
> Key: KAFKA-2799
> URL: https://issues.apache.org/jira/browse/KAFKA-2799
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The common pattern of the new consumer:
> {code}
> try {
>     ConsumerRecords<K, V> records = consumer.poll(timeout);
>     // process records
> } catch (WakeupException e) {
>     consumer.close();
> }
> {code}
> in which close() can commit offsets. But inside the poll() call we do the 
> following, in order:
> 1) trigger client.poll().
> 2) possibly update the consumed position if there is some data from the fetch 
> response.
> 3) before returning the records, possibly trigger another client.poll()
> If the wakeup exception is thrown in 3), the positions of not-yet-returned 
> messages can be committed, hence data loss.
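For application code hitting this before the fix, one way to avoid committing 
positions of unprocessed messages is to disable auto-commit and commit 
explicitly only after processing. A rough sketch under that assumption 
(illustrative only; broker address, group and topic names are hypothetical):
{code}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class ManualCommitLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("enable.auto.commit", "false"); // commit manually below
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        try {
            consumer.subscribe(Arrays.asList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // process record
                }
                // commits only positions of records already returned above
                consumer.commitSync();
            }
        } catch (WakeupException e) {
            // shutdown requested from another thread via consumer.wakeup()
        } finally {
            consumer.close();
        }
    }
}
{code}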



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: kafka 0.8 producer issue

2015-11-25 Thread Prabhjot Bharaj
Hi,

From the information that you've provided, I think your callback is the
culprit here, as can be seen from the stack trace:

at com.hpe.ssmamp.kafka.KafkaPDFAProducer$1.onCompletion(
KafkaPDFAProducer.java:62)

Please provide more information, such as a code snippet, so that we can
tell more.
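
In the meantime, a common cause of an NPE inside onCompletion is dereferencing
the RecordMetadata when the send failed: on failure the exception is non-null
and the metadata is null. A null-safe callback would look roughly like this (a
sketch, not your actual code):

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

public class LoggingCallback implements Callback {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // send failed: metadata is null here, so don't touch it
            System.err.println("send failed: " + exception);
        } else {
            System.out.println("sent to " + metadata.topic() + "-"
                    + metadata.partition() + " @ offset " + metadata.offset());
        }
    }
}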

Thanks,
Prabhjot

On Wed, Nov 25, 2015 at 9:24 PM, Kudumula, Surender <
surender.kudum...@hpe.com> wrote:

> Hi all,
> I am trying to get the producer working. It was working before, but now I'm
> getting the following issue. I have created a new topic as well, just in
> case it was an issue with the topic, but still no luck. I have increased the
> message size in the broker, as I am trying to send at least a 3 MB message
> here in byte array format. Any suggestions, please?
>
> 2015-11-25 15:46:11 INFO  Login:185 - TGT refresh sleeping until: Tue Dec
> 01 07:03:07 GMT 2015
> 2015-11-25 15:46:11 INFO  KafkaProducer:558 - Closing the Kafka producer
> with timeoutMillis = 9223372036854775807 ms.
> 2015-11-25 15:46:12 ERROR RecordBatch:96 - Error executing user-provided
> callback on message for topic-partition ResponseTopic-0:
> java.lang.NullPointerException
> at
> com.hpe.ssmamp.kafka.KafkaPDFAProducer$1.onCompletion(KafkaPDFAProducer.java:62)
> at
> org.apache.kafka.clients.producer.internals.RecordBatch.done(RecordBatch.java:93)
> at
> org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:285)
> at
> org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:253)
> at
> org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:55)
> at
> org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:328)
> at
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:237)
> at
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:209)
> at
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:140)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>


-- 
-
"There are only 10 types of people in the world: Those who understand
binary, and those who don't"


[jira] [Closed] (KAFKA-2800) Update outdated dependencies

2015-11-25 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke closed KAFKA-2800.
--

> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #184

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2878; Guard against OutOfMemory in Kafka broker

--
[...truncated 1401 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testV

[jira] [Commented] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-11-25 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027301#comment-15027301
 ] 

Ben Stopford commented on KAFKA-2891:
-

So I'm starting to think the problem may be related to 
https://issues.apache.org/jira/browse/KAFKA-2827 (in my case at least). There 
are periods where the ISR drops to 1, which it shouldn't do during a clean 
bounce. Adding artificial pauses between node restarts also appears to remove 
the problem. Not definitive yet; just a heads up.

> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing is quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,342] INFO Marking the coordinator 2147483644 dead. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,958] INFO Attempt to join group test-consumer-group 
> failed due to unknown member id, resetting and retrying. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:42,437] INFO Fetch offset null is out of range, resetting 
> offset (org.apache.kafka.clients.consumer.internals.Fetcher)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2887) TopicMetadataRequest creates topic if it does not exist

2015-11-25 Thread Andrew Winterman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027357#comment-15027357
 ] 

Andrew Winterman commented on KAFKA-2887:
-

My only complaint with this approach is that it leaves people on 0.8 in the 
lurch. If we make the configuration values for the broker more fine-grained, 
meaning one configuration value for "create topics on produce request" 
and one for "create topics on consume/metadata request", then that both 
preserves API continuity (or are configuration values part of the API?) and 
solves the problem we're facing.
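
Concretely, today there is a single broker switch covering both cases; the 
proposal above would split it in two. In the sketch below only the first key 
is a real config; the split names are hypothetical, for illustration:
{code}
# existing switch: topics are auto-created when a metadata request
# names a non-existent topic
auto.create.topics.enable=true

# hypothetical split discussed above (illustrative names only)
auto.create.topics.on.produce=true
auto.create.topics.on.metadata=false
{code}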

> TopicMetadataRequest creates topic if it does not exist
> ---
>
> Key: KAFKA-2887
> URL: https://issues.apache.org/jira/browse/KAFKA-2887
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.0
> Environment: Centos6, Java 1.7.0_75
>Reporter: Andrew Winterman
>Priority: Minor
>
> We wired up a probe HTTP endpoint to make TopicMetadataRequests with a 
> possible topic name. If no topic was found, we expected an empty response. 
> However, if we asked for the same topic twice, it would exist the second time!
> I think this is a bug because the purpose of the TopicMetadataRequest is to 
> provide information about the cluster, not mutate it. I can provide example 
> code if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread Jesse Anderson (JIRA)
Jesse Anderson created KAFKA-2892:
-

 Summary: Consumer Docs Use Wrong Method
 Key: KAFKA-2892
 URL: https://issues.apache.org/jira/browse/KAFKA-2892
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.0
Reporter: Jesse Anderson


The KafkaConsumer docs use a non-existent method for assigning partitions 
({{consumer.assign}}).

The JavaDocs show as:
 String topic = "foo";
 TopicPartition partition0 = new TopicPartition(topic, 0);
 TopicPartition partition1 = new TopicPartition(topic, 1);
 consumer.assign(partition0);
 consumer.assign(partition1);

Should be:
 String topic = "foo";
 TopicPartition partition0 = new TopicPartition(topic, 0);
 TopicPartition partition1 = new TopicPartition(topic, 1);
 consumer.assign(Arrays.asList(partition0, partition1));

(Note that assign replaces the full assignment, so calling it once per 
partition would leave only the last partition assigned.)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2893) Add Negative Partition Seek Check

2015-11-25 Thread Jesse Anderson (JIRA)
Jesse Anderson created KAFKA-2893:
-

 Summary: Add Negative Partition Seek Check
 Key: KAFKA-2893
 URL: https://issues.apache.org/jira/browse/KAFKA-2893
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.0
Reporter: Jesse Anderson


When calling seek with an offset that is a negative number, there isn't a 
check. When you do give a negative number, you get the following output:
{{2015-11-25 13:54:16 INFO  Fetcher:567 - Fetch offset null is out of range, 
resetting offset}}

Code to replicate:
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
TopicPartition partition = new TopicPartition(topic, 0);
consumer.assign(Arrays.asList(partition));
consumer.seek(partition, -1);
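
Until such a check exists in the client, a minimal guard sketch (class and 
method names are illustrative):

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public final class SeekGuard {
    // Fail fast on negative offsets instead of letting the consumer silently
    // reset to the auto.offset.reset position on the next fetch.
    public static void checkedSeek(KafkaConsumer<?, ?> consumer,
                                   TopicPartition partition, long offset) {
        if (offset < 0)
            throw new IllegalArgumentException(
                    "seek offset must be non-negative, got " + offset);
        consumer.seek(partition, offset);
    }
}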



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: All brokers are running but some partitions' leader is -1

2015-11-25 Thread Qi Xu
Hi Gwen,
Yes, we're going to upgrade to the 0.9.0 version. Regarding the upgrade, we
definitely don't want to have downtime of our cluster.
So the upgrade will be machine by machine. Will the 0.9.0 release work with
the August build together in the same Kafka cluster?
Also, we currently run a spark streaming job (with scala 2.10) against the
cluster. Any known issues with 0.9.0 that you are aware of under this scenario?

Thanks,
Tony


On Mon, Nov 23, 2015 at 5:41 PM, Gwen Shapira  wrote:

> We fixed many many bugs since August. Since we are about to release 0.9.0
> (with SSL!), maybe wait a day and go with a released and tested version.
>
> On Mon, Nov 23, 2015 at 3:01 PM, Qi Xu  wrote:
>
> > Forgot to mention that the Kafka version we're using is from Aug's
> > Trunk branch---which has the SSL support.
> >
> > Thanks again,
> > Qi
> >
> >
> > On Mon, Nov 23, 2015 at 2:29 PM, Qi Xu  wrote:
> >
> >> Loop another guy from our team.
> >>
> >> On Mon, Nov 23, 2015 at 2:26 PM, Qi Xu  wrote:
> >>
> >>> Hi folks,
> >>> We have a 10 node cluster and have several topics. Each topic has about
> >>> 256 partitions with a replication factor of 3. Now we run into an issue
> >>> that in some topics, a few partitions' (< 10) leader is -1 and all of
> >>> them have only one synced replica.
> >>>
> >>> From the Kafka manager, here's the snapshot:
> >>> [image: Inline image 2]
> >>>
> >>> [image: Inline image 1]
> >>>
> >>> here's the state log:
> >>> [2015-11-23 21:57:58,598] ERROR Controller 1 epoch 435499 initiated
> >>> state change for partition [userlogs,84] from OnlinePartition to
> >>> OnlinePartition failed (state.change.logger)
> >>> kafka.common.StateChangeFailedException: encountered error while
> >>> electing leader for partition [userlogs,84] due to: Preferred replica
> 0 for
> >>> partition [userlogs,84] is either not alive or not in the isr. Current
> >>> leader and ISR: [{"leader":-1,"leader_epoch":203,"isr":[1]}].
> >>> Caused by: kafka.common.StateChangeFailedException: Preferred replica 0
> >>> for partition [userlogs,84] is either not alive or not in the isr.
> Current
> >>> leader and ISR: [{"leader":-1,"leader_epoch":203,"isr":[1]}]
> >>>
> >>> My question is:
> >>> 1) how could this happen and how can I fix it or work around it?
> >>> 2) Is 256 partitions too big? We have about 200+ cores for spark
> >>> streaming job.
> >>>
> >>> Thanks,
> >>> Qi
> >>>
> >>>
> >>
> >
>


Release source doesn't contain .gitignore

2015-11-25 Thread Grant Henke
The recent release source does not contain the repo's .gitignore file. This
is definitely not a major issue, but I wanted to bring it up so it could be
added in future releases.

I downloaded the source from:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.0/kafka-0.9.0.0-src.tgz

Thanks,
Grant
-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[GitHub] kafka pull request: HOTFIX: fix StreamTask.close()

2015-11-25 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/586

HOTFIX: fix StreamTask.close()

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka fix_streamtask_close

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/586.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #586


commit f74ac68eb1a092e7b1abaa0fbda7ef0e86405f77
Author: Yasuhiro Matsuda 
Date:   2015-11-25T19:55:55Z

HOTFIX: fix StreamTask.close()




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2799) WakupException thrown in the followup poll() could lead to data loss

2015-11-25 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027502#comment-15027502
 ] 

Guozhang Wang commented on KAFKA-2799:
--

[~granthenke] Thanks for reporting this. It is in fact missing from the 0.9.0 
branch; I have just cherry-picked it from trunk to be included in the 0.9.0.1 release.

> WakupException thrown in the followup poll() could lead to data loss
> 
>
> Key: KAFKA-2799
> URL: https://issues.apache.org/jira/browse/KAFKA-2799
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.1
>
>
> The common pattern of the new consumer:
> {code}
> try {
>     ConsumerRecords<K, V> records = consumer.poll(timeout);
>     // process records
> } catch (WakeupException e) {
>     consumer.close();
> }
> {code}
> in which close() can commit offsets. But in the poll() call we do the
> following, in order:
> 1) trigger client.poll().
> 2) possibly update the consumed position if there is some data from the fetch
> response.
> 3) before returning the records, possibly trigger another client.poll().
> And if a wakeup exception is thrown in 3), it will lead to the not-yet-returned
> messages being committed, hence data loss.
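
Until the fix, a client-side workaround sketch is to disable auto commit and 
commit explicitly only after records have been processed, so that close() has 
nothing uncommitted to flush (running and process() are stand-ins for the 
application's own loop flag and logic):

// requires enable.auto.commit=false in the consumer config
try {
    while (running) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records)
            process(record);   // stand-in for application logic
        consumer.commitSync(); // commit only offsets of processed records
    }
} catch (WakeupException e) {
    // expected on shutdown via consumer.wakeup()
} finally {
    consumer.close();          // no auto-commit of unseen records here
}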



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2799) WakupException thrown in the followup poll() could lead to data loss

2015-11-25 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2799:
-
Fix Version/s: (was: 0.9.0.0)
   0.9.0.1

> WakupException thrown in the followup poll() could lead to data loss
> 
>
> Key: KAFKA-2799
> URL: https://issues.apache.org/jira/browse/KAFKA-2799
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.1
>
>
> The common pattern of the new consumer:
> {code}
> try {
>     ConsumerRecords<K, V> records = consumer.poll(timeout);
>     // process records
> } catch (WakeupException e) {
>     consumer.close();
> }
> {code}
> in which close() can commit offsets. But in the poll() call we do the
> following, in order:
> 1) trigger client.poll().
> 2) possibly update the consumed position if there is some data from the fetch
> response.
> 3) before returning the records, possibly trigger another client.poll().
> And if a wakeup exception is thrown in 3), it will lead to the not-yet-returned
> messages being committed, hence data loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: All brokers are running but some partitions' leader is -1

2015-11-25 Thread Gwen Shapira
1. Yes, you can do a rolling upgrade of brokers from 0.8.2 to 0.9.0. The
important thing is to upgrade the brokers before you upgrade any of the
clients.

2. I'm not aware of issues with 0.9.0 and SparkStreaming. However,
definitely do your own testing to make sure.
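
For the rolling bounce itself, the 0.9.0 upgrade notes describe pinning the 
inter-broker protocol during the transition; a sketch, assuming the documented 
procedure also applies to an August trunk build:

# server.properties on each broker while bouncing it onto the 0.9.0 code
inter.broker.protocol.version=0.8.2.X
# once every broker runs the new code, switch the setting and bounce once more
inter.broker.protocol.version=0.9.0.0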

On Wed, Nov 25, 2015 at 11:25 AM, Qi Xu  wrote:

> Hi Gwen,
> Yes, we're going to upgrade to the 0.9.0 version. Regarding the upgrade, we
> definitely don't want to have downtime of our cluster.
> So the upgrade will be machine by machine. Will the 0.9.0 release work with
> the August build together in the same Kafka cluster?
> Also, we currently run a spark streaming job (with scala 2.10) against the
> cluster. Any known issues with 0.9.0 that you are aware of under this scenario?
>
> Thanks,
> Tony
>
>
> On Mon, Nov 23, 2015 at 5:41 PM, Gwen Shapira  wrote:
>
> > We fixed many many bugs since August. Since we are about to release 0.9.0
> > (with SSL!), maybe wait a day and go with a released and tested version.
> >
> > On Mon, Nov 23, 2015 at 3:01 PM, Qi Xu  wrote:
> >
> > > Forgot to mention that the Kafka version we're using is from Aug's
> > > Trunk branch---which has the SSL support.
> > >
> > > Thanks again,
> > > Qi
> > >
> > >
> > > On Mon, Nov 23, 2015 at 2:29 PM, Qi Xu  wrote:
> > >
> > >> Loop another guy from our team.
> > >>
> > >> On Mon, Nov 23, 2015 at 2:26 PM, Qi Xu  wrote:
> > >>
> > >>> Hi folks,
> > >>> We have a 10 node cluster and have several topics. Each topic has
> > >>> about 256 partitions with a replication factor of 3. Now we run into
> > >>> an issue that in some topics, a few partitions' (< 10) leader is -1
> > >>> and all of them have only one synced replica.
> > >>>
> > >>> From the Kafka manager, here's the snapshot:
> > >>> [image: Inline image 2]
> > >>>
> > >>> [image: Inline image 1]
> > >>>
> > >>> here's the state log:
> > >>> [2015-11-23 21:57:58,598] ERROR Controller 1 epoch 435499 initiated
> > >>> state change for partition [userlogs,84] from OnlinePartition to
> > >>> OnlinePartition failed (state.change.logger)
> > >>> kafka.common.StateChangeFailedException: encountered error while
> > >>> electing leader for partition [userlogs,84] due to: Preferred replica
> > 0 for
> > >>> partition [userlogs,84] is either not alive or not in the isr.
> Current
> > >>> leader and ISR: [{"leader":-1,"leader_epoch":203,"isr":[1]}].
> > >>> Caused by: kafka.common.StateChangeFailedException: Preferred
> replica 0
> > >>> for partition [userlogs,84] is either not alive or not in the isr.
> > Current
> > >>> leader and ISR: [{"leader":-1,"leader_epoch":203,"isr":[1]}]
> > >>>
> > >>> My question is:
> > >>> 1) how could this happen and how can I fix it or work around it?
> > >>> 2) Is 256 partitions too big? We have about 200+ cores for spark
> > >>> streaming job.
> > >>>
> > >>> Thanks,
> > >>> Qi
> > >>>
> > >>>
> > >>
> > >
> >
>


[GitHub] kafka pull request: HOTFIX: fix StreamTask.close()

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/586


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka_0.9.0_jdk7 #43

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2799: skip wakeup in the follow-up poll() call.

--
[...truncated 214 lines...]
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
14 warnings found
:core:processResources UP-TO-DATE
:core:classes
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:core:copyDependantLibs UP-TO-DATE
:core:copyDependantTestLibs
:core:jar UP-TO-DATE
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:jar
:log4j-appender:compileJava
:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes
:log4j-appender:jar
:tools:compileJava
:tools:processResources UP-TO-DATE
:tools:classes
:clients:javadoc
:clients:javadocJar
:clients:srcJar
:clients:testJar
:clients:signArchives SKIPPED
:tools:copyDependantLibs
:tools:jar
:connect:api:compileJavaNote: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:api:processResources UP-TO-DATE
:connect:api:classes
:connect:api:copyDependantLibs
:connect:api:jar
:connect:file:compileJava
:connect:file:processResources UP-TO-DATE
:connect:file:classes
:connect:file:copyDependantLibs
:connect:file:jar
:connect:json:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:connect:json:processResources UP-TO-DATE
:connect:json:classes
:connect:json:copyDependantLibs
:connect:json:jar
:connect:runtime:compileJavaNote: Some input files use unchecked or unsafe 
operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:runtime:processResources UP-TO-DATE
:connect:runtime:classes
:connect:runtime:copyDependantLibs
:connect:runtime:jar
:jarAll
:test_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJava UP-TO-DATE
:kafka_0.9.0_jdk7:clients:processResources UP-TO-DATE
:kafka_0.9.0_jdk7:clients:classes UP-TO-DATE
:kafka_0.9.0_jdk7:clients:determineCommitId UP-TO-DATE
:kafka_0.9.0_jdk7:clients:createVersionFile
:kafka_0.9.0_jdk7:clients:jar UP-TO-DATE
:kafka_0.9.0_jdk7:clients:compileTestJava UP-TO-DATE
:kafka_0.9.0_jdk7:clients:processTestResources UP-TO-DATE
:kafka_0.9.0_jdk7:clients:testClasses UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileJava UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileScala UP-TO-DATE
:kafka_0.9.0_jdk7:core:processResources UP-TO-DATE
:kafka_0.9.0_jdk7:core:classes UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileTestJava UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileTestScala
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileTestScala' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of input files 
for task 'compileTestScala' during up-to-date check.  See stacktrace for 
details.
at 
org.gradle.api.internal.changedetection.rules.TaskUpToDateState.(TaskUpToDateState.java:59)
at 
org.gradle.api.internal.changedetection.changes.DefaultTaskArtifactStateRepository$TaskArtifactStateImpl.getStates(DefaultTaskArtifactStateRepository.java:126)
at 
org.gradle.api.internal.changedetection.changes.DefaultTaskArtifactStateRepository$TaskArtifactStateImpl.isUpToDate(DefaultTaskArtifactStateRepository.java:69)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:52)

Build failed in Jenkins: kafka-trunk-jdk8 #185

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: fix StreamTask.close()

--
[...truncated 6675 lines...]
org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateAndStop PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSourceConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSinkConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedH

[GitHub] kafka pull request: MINOR: change KStream processor names

2015-11-25 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/587

MINOR: change KStream processor names

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka kstream_processor_names

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/587.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #587


commit e256d9c08c359b108f5a55194f2ae885b2bf091e
Author: Yasuhiro Matsuda 
Date:   2015-11-25T21:18:01Z

MINOR: change KStream processor names




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Grammar on README.md

2015-11-25 Thread simplyianm
GitHub user simplyianm opened a pull request:

https://github.com/apache/kafka/pull/588

Grammar on README.md



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/simplyianm/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/588.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #588


commit a0c3279ec014183e62fd3903a09948a48461ed42
Author: Ian Macalinao 
Date:   2015-11-25T21:23:36Z

Grammar on README.md




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread Jesse Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Anderson updated KAFKA-2892:
--
Attachment: docspatch.diff

> Consumer Docs Use Wrong Method
> --
>
> Key: KAFKA-2892
> URL: https://issues.apache.org/jira/browse/KAFKA-2892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
> Fix For: 0.9.0.0
>
> Attachments: docspatch.diff
>
>
> The KafkaConsumer docs use a non-existent method for assigning partitions 
> ({{consumer.assign}}).
> The JavaDocs show as:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(partition0);
>  consumer.assign(partition1);
> Should be:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(Arrays.asList(partition0, partition1));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread Jesse Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Anderson updated KAFKA-2892:
--
Fix Version/s: 0.9.0.0
   Status: Patch Available  (was: Open)

> Consumer Docs Use Wrong Method
> --
>
> Key: KAFKA-2892
> URL: https://issues.apache.org/jira/browse/KAFKA-2892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
> Fix For: 0.9.0.0
>
> Attachments: docspatch.diff
>
>
> The KafkaConsumer docs use a non-existent method for assigning partitions 
> ({{consumer.assign}}).
> The JavaDocs show as:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(partition0);
>  consumer.assign(partition1);
> Should be:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(Arrays.asList(partition0, partition1));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027700#comment-15027700
 ] 

Grant Henke commented on KAFKA-2892:


Hi [~eljefe6aa]. We actually changed the docs patching process to follow the 
same process as contributing code. The docs files live in the /docs directory. 
A sample Github pull request can be seen here: 
https://github.com/apache/kafka/pull/498

The process for contributing can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes

Could you send a Github pull request?

> Consumer Docs Use Wrong Method
> --
>
> Key: KAFKA-2892
> URL: https://issues.apache.org/jira/browse/KAFKA-2892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
> Fix For: 0.9.0.0
>
> Attachments: docspatch.diff
>
>
> The KafkaConsumer docs use a non-existent method for assigning partitions 
> ({{consumer.assign}}).
> The JavaDocs show as:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(partition0);
>  consumer.assign(partition1);
> Should be:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(Arrays.asList(partition0, partition1));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027700#comment-15027700
 ] 

Grant Henke edited comment on KAFKA-2892 at 11/25/15 10:17 PM:
---

Hi [~eljefe6aa]. We actually changed the docs patching process to follow the 
same process as contributing code. The docs files live in the /docs directory. 
A sample Github pull request can be seen here: 
https://github.com/apache/kafka/pull/498

The process for contributing can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
and the docs process is here:
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes

Could you send a Github pull request?


was (Author: granthenke):
Hi [~eljefe6aa]. We actually changed the docs patching process to follow the 
same process as contributing code. The docs files live in the /docs directory. 
A sample Github pull request can be seen here: 
https://github.com/apache/kafka/pull/498

The process for contributing can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes

Could you send a Github pull request?

> Consumer Docs Use Wrong Method
> --
>
> Key: KAFKA-2892
> URL: https://issues.apache.org/jira/browse/KAFKA-2892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
> Fix For: 0.9.0.0
>
> Attachments: docspatch.diff
>
>
> The KafkaConsumer docs use a non-existent method for assigning partitions 
> ({{consumer.assign}}).
> The JavaDocs show as:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(partition0);
>  consumer.assign(partition1);
> Should be:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(Arrays.asList(partition0, partition1));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2718: Prevent temp directory being reuse...

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/583


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-11-25 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2718.
--
   Resolution: Fixed
Fix Version/s: (was: 0.9.0.0)
   0.9.0.1

Issue resolved by pull request 583
[https://github.com/apache/kafka/pull/583]

> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.1
>
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027710#comment-15027710
 ] 

ASF GitHub Bot commented on KAFKA-2718:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/583


> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.1
>
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: initialize Serdes with ProcessorContext

2015-11-25 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/589

MINOR: initialize Serdes with ProcessorContext

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka init_serdes_with_procctx

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/589.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #589


commit 451b7be8bb2c405cebea77536c1d8d5710085507
Author: Yasuhiro Matsuda 
Date:   2015-11-25T22:28:20Z

MINOR: initialize Serdes with ProcessorContext




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: change KStream processor names

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/587


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Support GitHub OAuth tokens in kafka-me...

2015-11-25 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/590

MINOR: Support GitHub OAuth tokens in kafka-merge-pr.py



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KOAuth

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/590.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #590


commit ca55d162a1acdeb559370a0eb3306bc09bee9195
Author: Guozhang Wang 
Date:   2015-11-25T22:54:02Z

v1




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Support GitHub OAuth tokens in kafka-me...

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/590


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2894) WorkerSinkTask doesn't handle rewinding offsets on rebalance

2015-11-25 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2894:


 Summary: WorkerSinkTask doesn't handle rewinding offsets on 
rebalance
 Key: KAFKA-2894
 URL: https://issues.apache.org/jira/browse/KAFKA-2894
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Affects Versions: 0.9.0.0
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


rewind() is only invoked at the beginning of each poll(). This means that if a 
rebalance occurs during the poll, it's feasible to get data that doesn't match a 
request to change offsets made during the rebalance. I think the consumer will 
hold on to consumed data across the rebalance if it is reassigned the same 
offset, so there may already be data ready to be delivered. Additionally, we may 
already have data in an incomplete messageBatch that should be discarded when 
the rewind is requested.

While connectors that care about this (i.e. ones that manage their own offsets) 
can handle this correctly by tracking the offsets they're expecting to see, it's 
a hassle, error-prone, and pretty unintuitive.
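
A sketch of that offset-tracking workaround in a sink task (class name and 
delivery hook are illustrative, and it assumes rewinds always go backwards; 
exactly the kind of bookkeeping the issue calls a hassle):

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public abstract class RewindAwareSinkTask extends SinkTask {
    private final Map<TopicPartition, Long> nextOffset = new HashMap<>();
    private final Map<TopicPartition, Long> staleBound = new HashMap<>();

    // call instead of context.offset() when rewinding backwards
    protected void rewindTo(TopicPartition tp, long target) {
        context.offset(tp, target);
        Long next = nextOffset.get(tp);
        if (next != null)
            staleBound.put(tp, next); // anything at/after this is pre-rewind data
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord r : records) {
            TopicPartition tp = new TopicPartition(r.topic(), r.kafkaPartition());
            Long bound = staleBound.get(tp);
            if (bound != null) {
                if (r.kafkaOffset() >= bound)
                    continue;            // stale data fetched before the rewind
                staleBound.remove(tp);   // rewound data has started arriving
            }
            deliver(r);
            nextOffset.put(tp, r.kafkaOffset() + 1);
        }
    }

    protected abstract void deliver(SinkRecord record); // illustrative hook
}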



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2643: Run mirror maker ducktape tests wi...

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/559


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2643) Run mirror maker tests in ducktape with SSL and SASL

2015-11-25 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2643.
-
Resolution: Fixed

Issue resolved by pull request 559
[https://github.com/apache/kafka/pull/559]

> Run mirror maker tests in ducktape with SSL and SASL
> 
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT. Should be run with 
> SSL as well. This requires console consumer timeout in new consumers which is 
> being added in KAFKA-2603



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2643) Run mirror maker tests in ducktape with SSL and SASL

2015-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027765#comment-15027765
 ] 

ASF GitHub Bot commented on KAFKA-2643:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/559


> Run mirror maker tests in ducktape with SSL and SASL
> 
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT. Should be run with 
> SSL as well. This requires console consumer timeout in new consumers which is 
> being added in KAFKA-2603



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: initialize Serdes with ProcessorContext

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/589


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: comments on KStream methods, and fix ge...

2015-11-25 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/591

MINOR: comments on KStream methods, and fix generics

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka comments

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/591.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #591


commit 37ad9e4a839eb03d5bb508894403399288032223
Author: Yasuhiro Matsuda 
Date:   2015-11-25T23:42:02Z

MINOR: comments on KStream methods, and fix generics




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2895) Add certificate authority functionality to TestSslUtils

2015-11-25 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-2895:
---

 Summary: Add certificate authority functionality to TestSslUtils
 Key: KAFKA-2895
 URL: https://issues.apache.org/jira/browse/KAFKA-2895
 Project: Kafka
  Issue Type: Test
  Components: clients, security
Affects Versions: 0.9.0.0
Reporter: Flavio Junqueira


The certificates generated in TestSslUtils are currently self-signed. I suggest 
we simulate the presence of a certificate authority by using the same key to 
sign all certificates. This way we won't have to worry about the order in which 
we create the SSL configuration for clients in integration tests, as we do 
currently (see IntegrationTestHarness for an example).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: All brokers are running but some partitions' leader is -1

2015-11-25 Thread Qi Xu
Great to know that. Thanks Gwen!

On Wed, Nov 25, 2015 at 12:03 PM, Gwen Shapira  wrote:

> 1. Yes, you can do a rolling upgrade of brokers from 0.8.2 to 0.9.0. The
> important thing is to upgrade the brokers before you upgrade any of the
> clients.
>
> 2. I'm not aware of issues with 0.9.0 and SparkStreaming. However,
> definitely do your own testing to make sure.
>
> On Wed, Nov 25, 2015 at 11:25 AM, Qi Xu  wrote:
>
> > Hi Gwen,
> > Yes, we're going to upgrade to the 0.9.0 version. Regarding the upgrade, we
> > definitely don't want to have downtime of our cluster.
> > So the upgrade will be machine by machine. Will the 0.9.0 release work with
> > the August build together in the same Kafka cluster?
> > Also, we currently run a spark streaming job (with scala 2.10) against the
> > cluster. Any known issues with 0.9.0 that you are aware of under this scenario?
> >
> > Thanks,
> > Tony
> >
> >
> > On Mon, Nov 23, 2015 at 5:41 PM, Gwen Shapira  wrote:
> >
> > > We fixed many many bugs since August. Since we are about to release
> 0.9.0
> > > (with SSL!), maybe wait a day and go with a released and tested
> version.
> > >
> > > On Mon, Nov 23, 2015 at 3:01 PM, Qi Xu  wrote:
> > >
> > > > Forgot to mention that the Kafka version we're using is from Aug's
> > > > Trunk branch---which has the SSL support.
> > > >
> > > > Thanks again,
> > > > Qi
> > > >
> > > >
> > > > On Mon, Nov 23, 2015 at 2:29 PM, Qi Xu  wrote:
> > > >
> > > >> Loop another guy from our team.
> > > >>
> > > >> On Mon, Nov 23, 2015 at 2:26 PM, Qi Xu  wrote:
> > > >>
> > > >>> Hi folks,
> > > >>> We have a 10 node cluster and have several topics. Each topic has
> > > >>> about 256 partitions with a replication factor of 3. Now we run into
> > > >>> an issue that in some topics, a few partitions' (< 10) leader is -1
> > > >>> and all of them have only one synced replica.
> > > >>>
> > > >>> From the Kafka manager, here's the snapshot:
> > > >>> [image: Inline image 2]
> > > >>>
> > > >>> [image: Inline image 1]
> > > >>>
> > > >>> here's the state log:
> > > >>> [2015-11-23 21:57:58,598] ERROR Controller 1 epoch 435499 initiated
> > > >>> state change for partition [userlogs,84] from OnlinePartition to
> > > >>> OnlinePartition failed (state.change.logger)
> > > >>> kafka.common.StateChangeFailedException: encountered error while
> > > >>> electing leader for partition [userlogs,84] due to: Preferred
> replica
> > > 0 for
> > > >>> partition [userlogs,84] is either not alive or not in the isr.
> > Current
> > > >>> leader and ISR: [{"leader":-1,"leader_epoch":203,"isr":[1]}].
> > > >>> Caused by: kafka.common.StateChangeFailedException: Preferred
> > replica 0
> > > >>> for partition [userlogs,84] is either not alive or not in the isr.
> > > Current
> > > >>> leader and ISR: [{"leader":-1,"leader_epoch":203,"isr":[1]}]
> > > >>>
> > > >>> My question is:
> > > >>> 1) how could this happen and how can I fix it or work around it?
> > > >>> 2) Is 256 partitions too big? We have about 200+ cores for spark
> > > >>> streaming job.
> > > >>>
> > > >>> Thanks,
> > > >>> Qi
> > > >>>
> > > >>>
> > > >>
> > > >
> > >
> >
>


Build failed in Jenkins: kafka_0.9.0_jdk7 #44

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2718: Prevent temp directory being reused in parallel test runs

[junrao] KAFKA-2878; Guard against OutOfMemory in Kafka broker

--
[...truncated 1862 lines...]
jdk1.7.0_51/jre/lib/locale/it/
jdk1.7.0_51/jre/lib/locale/it/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/it/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/zh_TW/
jdk1.7.0_51/jre/lib/locale/zh_TW/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/zh_TW/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/zh/
jdk1.7.0_51/jre/lib/locale/zh/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/zh/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/ja/
jdk1.7.0_51/jre/lib/locale/ja/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/ja/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/fr/
jdk1.7.0_51/jre/lib/locale/fr/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/fr/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/ko/
jdk1.7.0_51/jre/lib/locale/ko/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/ko/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/ko.UTF-8/
jdk1.7.0_51/jre/lib/locale/ko.UTF-8/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/ko.UTF-8/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/zh.GBK/
jdk1.7.0_51/jre/lib/locale/zh.GBK/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/zh.GBK/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/sv/
jdk1.7.0_51/jre/lib/locale/sv/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/sv/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/zh_HK.BIG5HK/
jdk1.7.0_51/jre/lib/locale/zh_HK.BIG5HK/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/zh_HK.BIG5HK/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/locale/es/
jdk1.7.0_51/jre/lib/locale/es/LC_MESSAGES/
jdk1.7.0_51/jre/lib/locale/es/LC_MESSAGES/sunw_java_plugin.mo
jdk1.7.0_51/jre/lib/fonts/
jdk1.7.0_51/jre/lib/fonts/LucidaBrightDemiBold.ttf
jdk1.7.0_51/jre/lib/fonts/LucidaBrightItalic.ttf
jdk1.7.0_51/jre/lib/fonts/LucidaSansDemiBold.ttf
jdk1.7.0_51/jre/lib/fonts/LucidaTypewriterRegular.ttf
jdk1.7.0_51/jre/lib/fonts/LucidaBrightDemiItalic.ttf
jdk1.7.0_51/jre/lib/fonts/LucidaTypewriterBold.ttf
jdk1.7.0_51/jre/lib/fonts/LucidaSansRegular.ttf
jdk1.7.0_51/jre/lib/fonts/fonts.dir
jdk1.7.0_51/jre/lib/fonts/LucidaBrightRegular.ttf
jdk1.7.0_51/jre/lib/ext/
jdk1.7.0_51/jre/lib/ext/sunjce_provider.jar
jdk1.7.0_51/jre/lib/ext/meta-index
jdk1.7.0_51/jre/lib/ext/zipfs.jar
jdk1.7.0_51/jre/lib/ext/sunpkcs11.jar
jdk1.7.0_51/jre/lib/ext/dnsns.jar
jdk1.7.0_51/jre/lib/ext/localedata.jar
jdk1.7.0_51/jre/lib/ext/sunec.jar
jdk1.7.0_51/jre/lib/fontconfig.Turbo.bfc
jdk1.7.0_51/jre/lib/fontconfig.properties.src
jdk1.7.0_51/jre/lib/jce.jar
jdk1.7.0_51/jre/lib/fontconfig.RedHat.6.bfc
jdk1.7.0_51/jre/lib/currency.data
jdk1.7.0_51/jre/lib/javafx.properties
jdk1.7.0_51/jre/THIRDPARTYLICENSEREADME.txt
jdk1.7.0_51/jre/Welcome.html
jdk1.7.0_51/jre/LICENSE
jdk1.7.0_51/jre/plugin/
jdk1.7.0_51/jre/plugin/desktop/
jdk1.7.0_51/jre/plugin/desktop/sun_java.png
jdk1.7.0_51/jre/plugin/desktop/sun_java.desktop
jdk1.7.0_51/jre/COPYRIGHT
jdk1.7.0_51/jre/THIRDPARTYLICENSEREADME-JAVAFX.txt
jdk1.7.0_51/LICENSE
jdk1.7.0_51/COPYRIGHT
jdk1.7.0_51/THIRDPARTYLICENSEREADME-JAVAFX.txt
Building remotely on jenkins-ubuntu-1404-4gb-8ee (jenkins-cloud-4GB cloud-slave 
Ubuntu ubuntu) in workspace 
Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision 33b4d3c4cc1f449f91a74bd589b1c90c60ddc71b 
(refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 33b4d3c4cc1f449f91a74bd589b1c90c60ddc71b
 > git rev-list c89b6f6a5e8dca191c8fcf3042d7d7f868b64f80 # timeout=10
Unpacking http://services.gradle.org/distributions/gradle-2.4-rc-2-bin.zip to 
/jenkins/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2 on 
jenkins-ubuntu-1404-4gb-8ee
ERROR: Failed to download 
http://service

[GitHub] kafka pull request: MINOR: comments on KStream methods, and fix ge...

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/591


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #186

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2718: Prevent temp directory being reused in parallel test runs

[wangguoz] MINOR: change KStream processor names

[cshapi] MINOR: Support GitHub OAuth tokens in kafka-merge-pr.py

[cshapi] KAFKA-2643: Run mirror maker ducktape tests with SSL and SASL

[wangguoz] MINOR: initialize Serdes with ProcessorContext

--
[...truncated 669 lines...]
kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribu

[jira] [Created] (KAFKA-2896) System test for partition re-assignment

2015-11-25 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2896:
---

 Summary: System test for partition re-assignment
 Key: KAFKA-2896
 URL: https://issues.apache.org/jira/browse/KAFKA-2896
 Project: Kafka
  Issue Type: Task
Reporter: Gwen Shapira


Lots of users depend on the partition re-assignment tool to manage their cluster. 

It will be nice to have a simple system test that creates a topic with a few 
partitions and a few replicas, reassigns everything, and validates the ISR 
afterwards. 

Just to make sure we are not breaking anything. Especially since we have plans 
to improve (read: modify) this area.
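
For reference, such a test would drive the stock tool with a reassignment JSON 
file like this (topic name and broker ids are illustrative):

{"version": 1,
 "partitions": [{"topic": "test-topic", "partition": 0, "replicas": [2, 3]}]}

applied via kafka-reassign-partitions.sh --zookeeper localhost:2181 
--reassignment-json-file reassign.json --execute, and then checked with the 
same command with --verify before validating the ISR.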



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Improve broker id documentation

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/585


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #857

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2718: Prevent temp directory being reused in parallel test runs

[wangguoz] MINOR: change KStream processor names

[cshapi] MINOR: Support GitHub OAuth tokens in kafka-merge-pr.py

[cshapi] KAFKA-2643: Run mirror maker ducktape tests with SSL and SASL

[wangguoz] MINOR: initialize Serdes with ProcessorContext

--
[...truncated 2766 lines...]
kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup FAILED
java.net.BindException: Address already in use

java.lang.NullPointerException
at 
kafka.integration.BaseTopicMetadataTest.tearDown(BaseTopicMetadataTest.scala:63)
at 
kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$tearDown(SaslPlaintextTopicMetadataTest.scala:23)
at kafka.api.SaslTestHarness$class.tearDown(SaslTestHarness.scala:73)
at 
kafka.integration.SaslPlaintextTopicMetadataTest.tearDown(SaslPlaintextTopicMetadataTest.scala:23)

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testDoublyLinkedList PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.Replication

Build failed in Jenkins: kafka-trunk-jdk8 #187

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: comments on KStream methods, and fix generics

[cshapi] MINOR: Improve broker id documentation

--
[...truncated 3707 lines...]

org.apache.kafka.common.record.RecordTest > testFields[28] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[29] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[29] PASSED

org.apache.kafka.common.record.RecordTest > testFields[29] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[30] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[30] PASSED

org.apache.kafka.common.record.RecordTest > testFields[30] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[31] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[31] PASSED

org.apache.kafka.common.record.RecordTest > testFields[31] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[32] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[32] PASSED

org.apache.kafka.common.record.RecordTest > testFields[32] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[33] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[33] PASSED

org.apache.kafka.common.record.RecordTest > testFields[33] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[34] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[34] PASSED

org.apache.kafka.common.record.RecordTest > testFields[34] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[35] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[35] PASSED

org.apache.kafka.common.record.RecordTest > testFields[35] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[36] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[36] PASSED

org.apache.kafka.common.record.RecordTest > testFields[36] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[37] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[37] PASSED

org.apache.kafka.common.record.RecordTest > testFields[37] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[38] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[38] PASSED

org.apache.kafka.common.record.RecordTest > testFields[38] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[39] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[39] PASSED

org.apache.kafka.common.record.RecordTest > testFields[39] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[40] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[40] PASSED

org.apache.kafka.common.record.RecordTest > testFields[40] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[41] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[41] PASSED

org.apache.kafka.common.record.RecordTest > testFields[41] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[42] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[42] PASSED

org.apache.kafka.common.record.RecordTest > testFields[42] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[43] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[43] PASSED

org.apache.kafka.common.record.RecordTest > testFields[43] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[44] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[44] PASSED

org.apache.kafka.common.record.RecordTest > testFields[44] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[45] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[45] PASSED

org.apache.kafka.common.record.RecordTest > testFields[45] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[46] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[46] PASSED

org.apache.kafka.common.record.RecordTest > testFields[46] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[47] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[47] PASSED

org.apache.kafka.common.record.RecordTest > testFields[47] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[48] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[48] PASSED

org.apache.kafka.common.record.RecordTest > testFields[48] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[49] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[49] PASSED

org.apache.kafka.common.record.RecordTest > testFields[49] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[50] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[50] PASSED

org.apache.kafka.common.record.RecordTest > testFields[50] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[51] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[51] PASSED

org.apache.kafka.comm

Build failed in Jenkins: kafka-trunk-jdk7 #858

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: comments on KStream methods, and fix generics

[cshapi] MINOR: Improve broker id documentation

--
[...truncated 4894 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > te

[jira] [Commented] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15028148#comment-15028148
 ] 

ASF GitHub Bot commented on KAFKA-2892:
---

GitHub user eljefe6a opened a pull request:

https://github.com/apache/kafka/pull/592

KAFKA-2892 Consumer Docs Use Wrong Method

The KafkaConsumer docs call consumer.assign with a single partition, an 
overload that does not exist (assign takes a list of partitions).

The JavaDocs show:
```
String topic = "foo";
TopicPartition partition0 = new TopicPartition(topic, 0);
TopicPartition partition1 = new TopicPartition(topic, 1);
consumer.assign(partition0);
consumer.assign(partition1);
```

Should be:
```
String topic = "foo";
TopicPartition partition0 = new TopicPartition(topic, 0);
TopicPartition partition1 = new TopicPartition(topic, 1);
consumer.assign(Arrays.asList(partition0, partition1));
```
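
For reference, a self-contained version of the corrected usage against the 0.9 
Java client; the bootstrap address, deserializers, and class name here are 
illustrative, not part of the original docs:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // no group.id is needed for manual assignment; disable auto-commit
        props.put("enable.auto.commit", "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        String topic = "foo";
        TopicPartition partition0 = new TopicPartition(topic, 0);
        TopicPartition partition1 = new TopicPartition(topic, 1);

        // assign() replaces the whole assignment, so pass all partitions at once
        consumer.assign(Arrays.asList(partition0, partition1));

        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("partition=%d offset=%d value=%s%n",
                    record.partition(), record.offset(), record.value());
        consumer.close();
    }
}
```

The key point is that assign() replaces the consumer's entire assignment, so 
calling it once per partition would leave only the last partition assigned.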

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eljefe6a/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/592.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #592


commit 3a9bb6b28771634481eb4edc6c8031b1361c3f25
Author: Jesse Anderson 
Date:   2015-11-26T04:38:44Z

KAFKA-2892 Consumer Docs Use Wrong Method




> Consumer Docs Use Wrong Method
> --
>
> Key: KAFKA-2892
> URL: https://issues.apache.org/jira/browse/KAFKA-2892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
> Fix For: 0.9.0.0
>
> Attachments: docspatch.diff
>
>
> The KafkaConsumer docs use a non-existent overload of {{consumer.assign}} for 
> assigning partitions (the method takes a list, not a single partition).
> The JavaDocs show:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(partition0);
>  consumer.assign(partition1);
> Should be:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(Arrays.asList(partition0, partition1));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2892 Consumer Docs Use Wrong Method

2015-11-25 Thread eljefe6a
GitHub user eljefe6a opened a pull request:

https://github.com/apache/kafka/pull/592

KAFKA-2892 Consumer Docs Use Wrong Method

The KafkaConsumer docs call consumer.assign with a single partition, an 
overload that does not exist (assign takes a list of partitions).

The JavaDocs show:
```
String topic = "foo";
TopicPartition partition0 = new TopicPartition(topic, 0);
TopicPartition partition1 = new TopicPartition(topic, 1);
consumer.assign(partition0);
consumer.assign(partition1);
```

Should be:
```
String topic = "foo";
TopicPartition partition0 = new TopicPartition(topic, 0);
TopicPartition partition1 = new TopicPartition(topic, 1);
consumer.assign(Arrays.asList(partition0, partition1));
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eljefe6a/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/592.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #592


commit 3a9bb6b28771634481eb4edc6c8031b1361c3f25
Author: Jesse Anderson 
Date:   2015-11-26T04:38:44Z

KAFKA-2892 Consumer Docs Use Wrong Method




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2843) when consumer got empty messageset, fetchResponse.highWatermark != current_offset?

2015-11-25 Thread netcafe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15028229#comment-15028229
 ] 

netcafe commented on KAFKA-2843:


I did; no result.

> when consumer got empty messageset, fetchResponse.highWatermark != 
> current_offset?
> --
>
> Key: KAFKA-2843
> URL: https://issues.apache.org/jira/browse/KAFKA-2843
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: netcafe
>
> I use the simple consumer to fetch messages from brokers (fetchSize > 
> messageSize). When the consumer gets an empty messageSet, e.g.:
> val offset = nextOffset
> val request = buildRequest(offset)
> val fetchResponse = consumer.fetch(request)
> val msgSet = fetchResponse.messageSet(topic, partition)
> 
> if (msgSet.isEmpty) {
>   val hwOffset = fetchResponse.highWatermark(topic, partition)
> 
>   if (offset == hwOffset) {
>     // ok, doSomething...
>   } else {
>     // in our scenario I found that highWatermark may not equal the current
>     // offset, but we have not been able to reproduce it.
>     // Can this case happen? If so, why?
>   }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15028260#comment-15028260
 ] 

ASF GitHub Bot commented on KAFKA-2892:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/592


> Consumer Docs Use Wrong Method
> --
>
> Key: KAFKA-2892
> URL: https://issues.apache.org/jira/browse/KAFKA-2892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
> Fix For: 0.9.0.0
>
> Attachments: docspatch.diff
>
>
> The KafkaConsumer docs use a non-existent overload of {{consumer.assign}} for 
> assigning partitions (the method takes a list, not a single partition).
> The JavaDocs show:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(partition0);
>  consumer.assign(partition1);
> Should be:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(Arrays.asList(partition0, partition1));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2892 Consumer Docs Use Wrong Method

2015-11-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/592


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2892) Consumer Docs Use Wrong Method

2015-11-25 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2892:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 592
[https://github.com/apache/kafka/pull/592]

> Consumer Docs Use Wrong Method
> --
>
> Key: KAFKA-2892
> URL: https://issues.apache.org/jira/browse/KAFKA-2892
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
> Fix For: 0.9.0.0, 0.9.1.0
>
> Attachments: docspatch.diff
>
>
> The KafkaConsumer docs use a non-existent overload of {{consumer.assign}} for 
> assigning partitions (the method takes a list, not a single partition).
> The JavaDocs show:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(partition0);
>  consumer.assign(partition1);
> Should be:
>  String topic = "foo";
>  TopicPartition partition0 = new TopicPartition(topic, 0);
>  TopicPartition partition1 = new TopicPartition(topic, 1);
>  consumer.assign(Arrays.asList(partition0, partition1));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka_0.9.0_jdk7 #45

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2892 Consumer Docs Use Wrong Method

--
[...truncated 214 lines...]
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, brokerEndPoint.host, brokerEndPoint.port)
:391: constructor UpdateMetadataRequest in class UpdateMetadataRequest is deprecated: see corresponding Javadoc for more information.
  new UpdateMetadataRequest(controllerId, controllerEpoch, liveBrokers.asJava, partitionStates.asJava)
:129: method readFromReadableChannel in class NetworkReceive is deprecated: see corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
there were 15 feature warning(s); re-run with -feature for details
14 warnings found
:core:processResources UP-TO-DATE
:core:classes
:clients:compileTestJava UP-TO-DATE
:clients:processTestResources UP-TO-DATE
:clients:testClasses UP-TO-DATE
:core:copyDependantLibs UP-TO-DATE
:core:copyDependantTestLibs
:core:jar UP-TO-DATE
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:jar
:log4j-appender:compileJava
:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes
:log4j-appender:jar
:tools:compileJava
:tools:processResources UP-TO-DATE
:tools:classes
:clients:javadoc
:clients:javadocJar
:clients:srcJar
:clients:testJar
:clients:signArchives SKIPPED
:tools:copyDependantLibs
:tools:jar
:connect:api:compileJavaNote: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:api:processResources UP-TO-DATE
:connect:api:classes
:connect:api:copyDependantLibs
:connect:api:jar
:connect:file:compileJava
:connect:file:processResources UP-TO-DATE
:connect:file:classes
:connect:file:copyDependantLibs
:connect:file:jar
:connect:json:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:connect:json:processResources UP-TO-DATE
:connect:json:classes
:connect:json:copyDependantLibs
:connect:json:jar
:connect:runtime:compileJavaNote: Some input files use unchecked or unsafe 
operations.
Note: Recompile with -Xlint:unchecked for details.

:connect:runtime:processResources UP-TO-DATE
:connect:runtime:classes
:connect:runtime:copyDependantLibs
:connect:runtime:jar
:jarAll
:test_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJava UP-TO-DATE
:kafka_0.9.0_jdk7:clients:processResources UP-TO-DATE
:kafka_0.9.0_jdk7:clients:classes UP-TO-DATE
:kafka_0.9.0_jdk7:clients:determineCommitId UP-TO-DATE
:kafka_0.9.0_jdk7:clients:createVersionFile
:kafka_0.9.0_jdk7:clients:jar UP-TO-DATE
:kafka_0.9.0_jdk7:clients:compileTestJava UP-TO-DATE
:kafka_0.9.0_jdk7:clients:processTestResources UP-TO-DATE
:kafka_0.9.0_jdk7:clients:testClasses UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileJava UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileScala UP-TO-DATE
:kafka_0.9.0_jdk7:core:processResources UP-TO-DATE
:kafka_0.9.0_jdk7:core:classes UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileTestJava UP-TO-DATE
:kafka_0.9.0_jdk7:core:compileTestScala
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileTestScala' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of input files 
for task 'compileTestScala' during up-to-date check.  See stacktrace for 
details.
at 
org.gradle.api.internal.changedetection.rules.TaskUpToDateState.<init>(TaskUpToDateState.java:59)
at 
org.gradle.api.internal.changedetection.changes.DefaultTaskArtifactStateRepository$TaskArtifactStateImpl.getStates(DefaultTaskArtifactStateRepository.java:126)
at 
org.gradle.api.internal.changedetection.changes.DefaultTaskArtifactStateRepository$TaskArtifactStateImpl.isUpToDate(DefaultTaskArtifactStateRepository.java:69)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:52)
at 
o
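
(The stack trace above is truncated in the archive.) When fileHashes.bin is 
corrupted like this, one common local workaround, offered here as a guess at 
the remedy rather than a confirmed fix, is to stop the Gradle daemon and clear 
the project-level cache before re-running:

```
$ ./gradlew --stop     # stop any Gradle daemons that may hold the cache open
$ rm -rf .gradle       # drop the project cache containing fileHashes.bin
$ ./gradlew test       # re-run with a fresh up-to-date-check cache
```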