[GitHub] kafka pull request: KAFKA-3597: Query ConsoleConsumer and Verifiab...

2016-04-27 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/1278

KAFKA-3597: Query ConsoleConsumer and VerifiableProducer if they shutdown 
cleanly

Even if a test calls stop() on console_consumer or verifiable_producer, the 
producer/consumer may still fail to shut down cleanly and be killed forcefully 
after a timeout. It is useful for some tests to know whether a clean shutdown 
happened or not. This PR adds methods to console_consumer and 
verifiable_producer to query whether a clean shutdown occurred.
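A minimal Python sketch of the bookkeeping such a query could use; the class, method, and attribute names below are illustrative only, not the actual Kafka test-framework API:

```python
# Hypothetical sketch: a service records, per node, whether stop_node()
# saw a clean exit before the kill timeout, so tests can query it later.
class VerifiableProducerSketch:
    def __init__(self):
        self.clean_shutdown_nodes = set()
        self.dirty_shutdown_nodes = set()

    def stop_node(self, node, timeout_sec=15):
        if self._wait_for_exit(node, timeout_sec):
            self.clean_shutdown_nodes.add(node)
        else:
            self.dirty_shutdown_nodes.add(node)  # had to be killed forcefully

    def all_shutdown_clean(self):
        """What a test queries after stop(): did every node exit cleanly?"""
        return len(self.dirty_shutdown_nodes) == 0

    def _wait_for_exit(self, node, timeout_sec):
        # Placeholder: the real service would poll the remote process
        # until it exits or timeout_sec elapses.
        return node != "slow-node"

p = VerifiableProducerSketch()
p.stop_node("node1")       # exits before the timeout
p.stop_node("slow-node")   # simulated hang: forcefully killed
```

A test can then assert `all_shutdown_clean()` instead of inferring shutdown behavior from logs.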

@hachikuji and/or @granders Please review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3597

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1278.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1278


commit 501ae1ad97b6b77a2078ed234afe9f828ecd
Author: Anna Povzner 
Date:   2016-04-27T22:56:43Z

KAFKA-3597: Enable query ConsoleConsumer and VerifiableProducer if they 
shutdown cleanly




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2825, KAFKA-2851: Controller failover te...

2016-04-25 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/570




[GitHub] kafka pull request: MINOR: ensure original use of prop_file in ver...

2016-04-05 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/1192

MINOR: ensure original use of prop_file in verifiable producer

This PR: https://github.com/apache/kafka/pull/958 fixed the use of 
prop_file when there are multiple producers (previously, every producer 
appended to the same config). However, it assumes that self.prop_file is 
initially "". That is true for all existing tests, but it precludes us from 
extending verifiable producer and adding more properties to the producer config 
(as console consumer does). This is a small PR that restores the original 
behavior while keeping the multiple-producer fix in verifiable producer.
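An illustrative sketch of the behavior described above (names are hypothetical, not the real VerifiableProducer implementation): per-node config is rendered from the service-level prop_file plus per-run options, so one producer's additions never leak into another producer's config:

```python
# Base properties a subclass may have pre-populated (need not be "").
BASE_PROPS = "retries=5\n"

def render_node_config(base_prop_file, compression_type=None):
    """Build the config for one node without mutating the shared base."""
    props = base_prop_file  # start from the service-level properties
    if compression_type is not None:
        props += "compression.type=%s\n" % compression_type
    return props

cfg_a = render_node_config(BASE_PROPS, compression_type="snappy")
cfg_b = render_node_config(BASE_PROPS)  # unaffected by producer A's options
```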

@granders please review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka fix_verifiable_producer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1192.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1192


commit 1ff47c5f64ca22b28b476142a92d7a68966505f5
Author: Anna Povzner 
Date:   2016-04-06T00:45:33Z

MINOR: ensure original use of prop_file in verifiable producer






[GitHub] kafka pull request: KAFKA-3303: Pass partial record metadata to Pr...

2016-03-04 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/1015

KAFKA-3303: Pass partial record metadata to 
ProducerInterceptor.onAcknowledgement on error

This is a KIP-42 followup. 

Currently, if sending the record fails before it reaches the server, 
ProducerInterceptor.onAcknowledgement() is called with metadata == null and a 
non-null exception. However, it is useful to pass the topic and partition, if 
known, to ProducerInterceptor.onAcknowledgement() as well. This patch ensures 
that ProducerInterceptor.onAcknowledgement() gets record metadata with the 
topic and, when available, the partition. If the partition is not set in 
'record' and KafkaProducer.send() fails before a partition is assigned, then 
ProducerInterceptor.onAcknowledgement() gets RecordMetadata with partition == 
-1. The only time ProducerInterceptor.onAcknowledgement() gets null record 
metadata is when the client passes a null record to KafkaProducer.send().
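The error-path rule above can be sketched as pseudocode (in Python for brevity; the real change is in the Java KafkaProducer, and the helper name here is illustrative):

```python
UNKNOWN_PARTITION = -1

def metadata_for_failed_send(record):
    """Metadata handed to onAcknowledgement() when a send fails early."""
    if record is None:
        return None  # only case where the interceptor sees null metadata
    partition = record.get("partition")
    if partition is None:
        partition = UNKNOWN_PARTITION  # failed before partition assignment
    return {"topic": record["topic"], "partition": partition}

m = metadata_for_failed_send({"topic": "t1"})  # no partition in the record
```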

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kip42-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1015


commit 169085a6b502d9458f477030cd6045f20b0100a7
Author: Anna Povzner 
Date:   2016-03-05T01:05:56Z

KAFKA-3303: Pass partial record metadata to Interceptor onAcknowledgement 
in case of errors






[GitHub] kafka pull request: KAFKA-3201: Added rolling upgrade system tests...

2016-02-26 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/980

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: set inter.broker.protocol.version = 0.8 and 
message.format.version = 0.8
Second rolling bounce: use the latest (default) inter.broker.protocol.version 
and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use the latest (default) inter.broker.protocol.version 
and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: set inter.broker.protocol.version = 0.9 and 
message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and 
message.format.version = 0.9

Plus a couple of variations of these tests using the old/new consumer or no 
compression / snappy compression. 

Also added an optional extra verification to the ProduceConsumeValidate test 
to verify that all acks received by the producer are successful. 
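The two-phase bounce the tests above drive can be sketched as follows (the broker property names are as written in the description above; the helper itself is a hypothetical illustration):

```python
def upgrade_configs(from_version, to_protocol=None, to_message_format=None):
    """Return the broker overrides for each of the two rolling bounces."""
    first_bounce = {
        # New binaries, but keep speaking the old protocol/format until
        # every broker in the cluster has been upgraded.
        "inter.broker.protocol.version": from_version,
        "message.format.version": from_version,
    }
    second_bounce = {}
    if to_protocol is not None:
        second_bounce["inter.broker.protocol.version"] = to_protocol
    if to_message_format is not None:
        second_bounce["message.format.version"] = to_message_format
    # Empty overrides mean "use the latest defaults" in this sketch.
    return first_bounce, second_bounce

# Test 3 above: bump the protocol to 0.10 but keep the 0.9 message format.
first, second = upgrade_configs("0.9", to_protocol="0.10", to_message_format="0.9")
```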

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3201-02

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/980.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #980


commit 35e7362e316419675cf6614787fcc2d12fae6e74
Author: Anna Povzner 
Date:   2016-02-26T22:09:14Z

KAFKA-3201: Added rolling upgrade system tests from 0.8 and 0.9 to 0.10

commit 208a50458ecff8ef1bf9b601c1162e796ad7de28
Author: Anna Povzner 
Date:   2016-02-26T22:59:22Z

Upgrade system tests ensure all producer acks are successful

commit dce6ff016c575aae30587c92f71159886158972c
Author: Anna Povzner 
Date:   2016-02-26T23:18:37Z

Using one producer in upgrade test, because --prefixValue is only supported 
in verifiable producer in trunk






[GitHub] kafka pull request: KAFKA-3214: Added system tests for compressed ...

2016-02-23 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/958

KAFKA-3214: Added system tests for compressed topics

Added the following tests:
1. Extended TestVerifiableProducer (the sanity check test) to test trunk with 
snappy compression (one producer/one topic).
2. Added CompressionTest that tests 3 producers: 2a) each uses a different 
compression type; 2b) each uses either snappy compression or no compression.

Enabled VerifiableProducer to run producers with different compression 
types (passed in the constructor).
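A small sketch of assigning one compression type per producer node, in the spirit of the CompressionTest variants above (illustrative only, not the real service code):

```python
def assign_compression(node_names, compression_types):
    """Pair each producer node with its compression type ('none' disables)."""
    if len(node_names) != len(compression_types):
        raise ValueError("need exactly one compression type per producer")
    return dict(zip(node_names, compression_types))

# Variant 2a: every producer uses a different codec.
per_node = assign_compression(["p1", "p2", "p3"], ["gzip", "snappy", "lz4"])
# Variant 2b: a mix of snappy and uncompressed producers.
mixed = assign_compression(["p1", "p2", "p3"], ["snappy", "none", "snappy"])
```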



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/958.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #958


commit 588d4caf3f8830dfcc185da30dfdb40de04cd7cd
Author: Anna Povzner 
Date:   2016-02-23T22:22:34Z

KAFKA-3214: Added system tests for compressed topics






[GitHub] kafka pull request: KAFKA-3196: Added checksum and size to RecordM...

2016-02-22 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/951

KAFKA-3196: Added checksum and size to RecordMetadata and ConsumerRecord

This is the second (remaining) part of KIP-42. See 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3196

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/951.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #951


commit ce10691e621a74070243c16dc8c0aa5ada531c72
Author: Anna Povzner 
Date:   2016-02-22T23:49:31Z

KAFKA-3196: KIP-42 (part 2) Added checksum and record size to 
RecordMetadata and ConsumerRecord






[GitHub] kafka pull request: KAFKA-3162: Added producer and consumer interc...

2016-02-02 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/854

KAFKA-3162: Added producer and consumer interceptors

This is most of KIP-42: producer and consumer interceptors. (Except 
exposing CRC and record sizes to the interceptors, which is coming as a 
separate PR, tracked by KAFKA-3196.)

This PR includes:
1. Add the ProducerInterceptor interface and call its callbacks from the 
appropriate places in KafkaProducer.
2. Add the ConsumerInterceptor interface and call its callbacks from the 
appropriate places in KafkaConsumer.
3. Add unit tests for the interceptor changes.
4. Add an integration test for both mutable consumer and producer interceptors.
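A minimal Python sketch of the chaining idea behind these interfaces (the real interfaces are Java's ProducerInterceptor/ConsumerInterceptor; everything below is illustrative):

```python
class InterceptorChain:
    """Invoke each interceptor in order; each may observe or replace the record."""
    def __init__(self, interceptors):
        self.interceptors = list(interceptors)

    def on_send(self, record):
        for ic in self.interceptors:
            record = ic(record)
        return record

chain = InterceptorChain([
    lambda r: dict(r, value=r["value"].upper()),  # e.g. a normalizing interceptor
    lambda r: dict(r, headers=["audited"]),       # e.g. an audit-tagging interceptor
])
out = chain.on_send({"topic": "t", "value": "hello"})
```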

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kip42

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/854.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #854


commit d851ab68fadfff4f80318251cdcb4caf2097e161
Author: Anna Povzner 
Date:   2016-02-03T00:40:55Z

KAFKA-3162 Added producer and consumer interceptors






[GitHub] kafka pull request: POC Producer Interceptor and simple C3 impleme...

2016-01-12 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/760




[GitHub] kafka pull request: POC Producer Interceptor and simple C3 impleme...

2016-01-12 Thread apovzner
GitHub user apovzner reopened a pull request:

https://github.com/apache/kafka/pull/760

POC Producer Interceptor and simple C3 implementation of producer 
interceptor.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka interceptor-kip

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/760.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #760


commit 239eabd2433ee97bc04e1e8698e7d875ee4059c3
Author: Anna Povzner 
Date:   2016-01-13T00:33:20Z

POC Producer Interceptor and simple C3 implementation of producer 
interceptor.

commit 8eeab971f8d472891aac1371c0528505732920c9
Author: Anna Povzner 
Date:   2016-01-13T01:03:08Z

Removing files just added with the last commit.






[GitHub] kafka pull request: POC Producer Interceptor and simple C3 impleme...

2016-01-12 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/760




[GitHub] kafka pull request: POC Producer Interceptor and simple C3 impleme...

2016-01-12 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/760

POC Producer Interceptor and simple C3 implementation of producer 
interceptor.

This PR is for code review only (pre-KIP POC code).

What's missing:
1. We will specify all interceptor classes in config and load them in the 
specified order. 
2. The ProducerInterceptor API is still up for discussion -- we currently 
allow the interceptor to modify the serialized key and serialized value, 
which is why the order in which interceptors are stacked matters. This 
behavior is not required for recording audit metrics, but could be useful for 
other interceptor use cases such as message encryption.
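Point 1 above can be sketched as follows; the config key and registry are hypothetical stand-ins for config-driven class loading:

```python
# Hypothetical registry mapping class names to constructors.
REGISTRY = {"Audit": lambda: "audit", "Encrypt": lambda: "encrypt"}

def load_interceptors(config):
    """Instantiate interceptors in the order listed in the config string."""
    names = [n.strip()
             for n in config.get("interceptor.classes", "").split(",")
             if n.strip()]
    # Order matters when interceptors may rewrite the serialized key/value.
    return [REGISTRY[n]() for n in names]

chain = load_interceptors({"interceptor.classes": "Encrypt, Audit"})
```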

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka interceptor-kip

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/760.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #760


commit 239eabd2433ee97bc04e1e8698e7d875ee4059c3
Author: Anna Povzner 
Date:   2016-01-13T00:33:20Z

POC Producer Interceptor and simple C3 implementation of producer 
interceptor.






[GitHub] kafka pull request: KAFKA-2896 Added system test for partition re-...

2015-12-09 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/655

KAFKA-2896 Added system test for partition re-assignment

Partition re-assignment tests with and without broker failure.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2896

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/655.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #655


commit bddca8055a70ccc4385e7898fd6ff2eb38db
Author: Anna Povzner 
Date:   2015-12-10T01:06:11Z

KAFKA-2896 Added system test for partition re-assignment






[GitHub] kafka pull request: KAFKA-2825: Add controller failover to existin...

2015-12-02 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/618

KAFKA-2825: Add controller failover to existing replication tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2825_01

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/618.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #618


commit fa0b4156d209522b1fe7656f73bb2792d8c932b3
Author: Anna Povzner 
Date:   2015-12-02T22:38:20Z

KAFKA-2825: Add controller failover to existing replication tests






[GitHub] kafka pull request: KAFKA-2851 Using random dir under /temp for lo...

2015-12-02 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/609




[GitHub] kafka pull request: KAFKA-1851 Using random file names for local k...

2015-12-01 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/610

KAFKA-1851 Using random file names for local kdc files to avoid conflicts.

I originally tried to solve the problem by using tempfile, creating and 
using an scp() utility method that generated a random local temp file every 
time it was called. However, that required passing the miniKdc object to 
SecurityConfig setup_node, which looked very invasive, since many tests use 
this method. Here is the PR for that approach, which I think we will close: 
https://github.com/apache/kafka/pull/609

This change is the least invasive way to solve conflicts between 
multiple test jobs. 
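The fix can be sketched as deriving a unique local path per test job instead of a fixed one (the helper name is illustrative):

```python
import uuid

def local_kdc_path(filename):
    """Unique local scratch path so parallel test jobs don't collide."""
    return "/tmp/%s-%s" % (uuid.uuid4().hex, filename)

a = local_kdc_path("krb5.conf")
b = local_kdc_path("krb5.conf")  # a second job gets a different path
```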

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2851_01

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/610.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #610


commit 4c9c76825b9dff5fb509eac01592d37f357b5775
Author: Anna Povzner 
Date:   2015-12-02T01:55:08Z

KAFKA-2851:  Using random file names for local kdc files to avoid conflicts






[GitHub] kafka pull request: KAFKA-1851 Using random dir under /temp for lo...

2015-12-01 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/609

KAFKA-1851 Using random dir under /temp for local kdc files to avoid 
conflicts when multiple test jobs are running.

I manually separated changes for KAFKA-2851 from this PR:  
https://github.com/apache/kafka/pull/570 which also had KAFKA-2825 changes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-2851

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/609.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #609


commit d19a8533243a77f026a4f547bad40fd10dd68745
Author: Anna Povzner 
Date:   2015-12-02T00:02:13Z

KAFKA-1851 Using random dir under /temp for local kdc files to avoid 
conflicts when multiple test jobs are running.






[GitHub] kafka pull request: KAFKA-2825, KAFKA-2851: Extended existing duck...

2015-11-20 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/518




[GitHub] kafka pull request: KAFKA-2825, KAFKA-2852: Controller failover te...

2015-11-20 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/570

KAFKA-2825, KAFKA-2852: Controller failover tests added to ducktape 
replication tests and fix to temp dir

I closed the original pull request, which contained previous review comments 
by Geoff (already addressed here), because I got into a bad rebase situation. 
So I created a new branch and cherry-picked my commits, then merged with Ben's 
changes that fix the MiniKDC tests to run on Virtual Box. That change 
conflicted with mine, where I was copying MiniKDC files with the new scp 
method and the temp file was created inside that method. To merge Ben's 
changes, I added two optional parameters to scp(), 'pattern' and 'subst', to 
optionally substitute a string while scp'ing files, which is needed for the 
krb5.conf file.
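The substitution part of that helper can be sketched as follows (hypothetical signature; the real helper copies files between remote nodes, which is omitted here):

```python
import re

def scp_contents(contents, pattern=None, subst=None):
    """Copy file contents, optionally rewriting matches of 'pattern'."""
    if pattern is not None:
        contents = re.sub(pattern, subst, contents)
    return contents

# e.g. point krb5.conf at the node that actually runs the KDC
out = scp_contents("kdc = localhost:88", pattern="localhost", subst="kdc-host")
```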

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-2825

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/570.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #570


commit 44f7a6ea9fbfc11e6eebc44b5bfab36431469522
Author: Anna Povzner 
Date:   2015-11-11T22:43:49Z

Extended existing ducktape replication tests to include controller failover

commit 447e330c840e9a556aff2d05d54f873aee8641a5
Author: Anna Povzner 
Date:   2015-11-12T04:12:43Z

Using random dir under /temp for local kdc files to avoid conflicts.

commit a364a3f289277579f17973829a3d51a97749875c
Author: Anna Povzner 
Date:   2015-11-12T21:40:32Z

Fixed usage of random dir under local /tmp for miniKdc files

commit 5b2c048d1c0922b8d36286d96c90bae53c01a671
Author: Anna Povzner 
Date:   2015-11-12T21:48:47Z

Fix to make sure that temp dir for mini kdc files is removed after test 
finishes.

commit 18c8670e18cfdb641efe14df3c2479ba7e340e0d
Author: Anna Povzner 
Date:   2015-11-17T19:17:05Z

KAFKA-2825 Moved query zookeeper method from KafkaService to 
ZookeeperService

commit d70d1eb17fa1ee54dadd52d55ba3e89a81b37c0d
Author: Anna Povzner 
Date:   2015-11-17T21:54:04Z

KAFKA-2851 Added scp method to remote_account utils to scp between two 
remote nodes through unique local temp file

commit 34830cd67e00f7b28d1afcffe9653fc543bca9d0
Author: Anna Povzner 
Date:   2015-11-17T21:59:02Z

KAFKA-2851: minor fix to format string in utils.remote_account.scp method

commit e5a28e3e2718e057138043a7d6cbc9d42c17d84e
Author: Anna Povzner 
Date:   2015-11-18T02:56:57Z

KAFKA-2851: clean up temp file even if scp fails

commit bed5a2f1f75462d857fc44382322544e6adc2bb2
Author: Anna Povzner 
Date:   2015-11-18T18:17:09Z

KAFKA-2825 Using only PLAINTEXT and SASL_SSL security protocols for 
controller failover tests

commit 84d21b6ac1324f7ac2bbc0f908d7a218e95501d5
Author: Anna Povzner 
Date:   2015-11-20T21:49:49Z

Merged with Ben's changes to make MiniKDC tests run on Virtual Box

commit f0d630907f4eaf14e7274e9075022d949e5b2752
Author: Anna Povzner 
Date:   2015-11-20T21:53:43Z

Very minor changes: typo in output string and some white spaces.






[GitHub] kafka pull request: KAFKA-2825: Extended existing ducktape replica...

2015-11-12 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/518

KAFKA-2825: Extended existing ducktape replication tests to include 
controller failover



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-86

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/518.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #518


commit b084c876bfd5d2214640aa75767c6935519ae2db
Author: Anna Povzner 
Date:   2015-11-11T22:43:49Z

Extended existing ducktape replication tests to include controller failover

commit 3bf3b88256c7fb9e13776ed7ab17c61c017b58d7
Author: Anna Povzner 
Date:   2015-11-12T04:12:43Z

Using random dir under /temp for local kdc files to avoid conflicts.

commit 8f210d479bdcf71fb5cb7a3bdebb6a969a7a453e
Author: Anna Povzner 
Date:   2015-11-12T21:40:32Z

Fixed usage of random dir under local /tmp for miniKdc files

commit 7bb75427709810736afe213e83c33fb98d2f6c5a
Author: Anna Povzner 
Date:   2015-11-12T21:48:47Z

Fix to make sure that temp dir for mini kdc files is removed after test 
finishes.






[GitHub] kafka pull request: KAFKA-2769: Multi-consumer integration tests f...

2015-11-09 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/472

KAFKA-2769:  Multi-consumer integration tests for consumer assignment incl. 
session timeouts and corresponding fixes

-- Refactored multi-consumer integration group assignment validation tests 
for round-robin assignment
-- Added multi-consumer integration tests for session timeout expiration:
   1. When a consumer stops polling
   2. When a consumer calls close()
-- Fixes to issues found with session timeout expiration tests, with help 
from Jason Gustafson: try to avoid SendFailedException by cancelling the 
scheduled tasks and ensuring a metadata update before sending the group leave 
request, and send the leave group request with retries.
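The "send leave group request with retries" part can be sketched as a simple retry loop (illustrative; the real logic lives in the Java consumer coordinator):

```python
def send_with_retries(send_once, max_retries=3):
    """Attempt a leave-group send, retrying on failure up to max_retries."""
    for attempt in range(max_retries):
        if send_once():
            return True
    return False

attempts = []
def flaky_send():
    # Simulate a send that fails once (e.g. SendFailedException), then succeeds.
    attempts.append(1)
    return len(attempts) >= 2

ok = send_with_retries(flaky_send)
```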

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-81

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/472.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #472


commit 0dc57aa7bed768559da19707e672dfbb55c0460b
Author: Anna Povzner 
Date:   2015-11-06T04:02:55Z

KAFKA-2769: Multi-consumer integration tests for consumer group subscribe 
and timeouts and consumer timeout fixes.

-- Refactored multi-consumer integration group assignment validation tests 
for round-robin assignment
-- Added multi-consumer integration tests for session timeout expiration
-- Fixes to issues found with session timeout expiration tests with help 
from Jason Gustafson.
1. shouldKeepMemberAlive(): when we are in the sync phase, we do want 
to expire members if we don't get any response
2. Try to avoid  SendFailedException exception by cancelling the 
scheduled tasks and ensuring metadata update before sending group leave 
requests.

commit 87554675c0ad5e51602c4cc37a40a9bb273b6dd0
Author: Anna Povzner 
Date:   2015-11-07T00:10:04Z

KAFKA-2769: More reliable sending of leave group request in the consumer on 
consumer close().

commit 353ef1c079f39ad15a91ba26f64f95c541d517f1
Author: Anna Povzner 
Date:   2015-11-09T19:56:36Z

Reverted a change in shouldKeepMemberAlive() to check for sync callback set

commit 83f703d6cb8720ac12e25692ce273e15813e89f6
Author: Anna Povzner 
Date:   2015-11-09T20:52:15Z

fixed minor test issues






[GitHub] kafka pull request: Do not merge: Multi-consumer integration tests...

2015-11-09 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/440




[GitHub] kafka pull request: Do not merge: Multi-consumer integration tests...

2015-11-05 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/440

Do not merge: Multi-consumer integration tests for consumer group subscribe 
and timeouts and corresponding fixes

A subset of changes for CPKAFKA-81 for initial review.

-- Refactored multi-consumer integration group assignment validation tests 
for round-robin assignment
-- Added multi-consumer integration tests for session timeout expiration
-- Fixes to issues found with session timeout expiration tests, with help 
from Jason Gustafson.
1. shouldKeepMemberAlive(): when we are in the sync phase, we do want 
to expire members if we don't get any response
2. Try to avoid SendFailedException by cancelling the scheduled tasks 
and ensuring a metadata update before sending group leave requests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka81_01

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/440.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #440


commit e00a864543e97c12e1f40d1f2317525f6f38a1be
Author: Anna Povzner 
Date:   2015-11-06T04:02:55Z

Multi-consumer integration tests for consumer group subscribe and timeouts 
and consumer timeout fixes.

-- Refactored multi-consumer integration group assignment validation tests 
for round-robin assignment
-- Added multi-consumer integration tests for session timeout expiration
-- Fixes to issues found with session timeout expiration tests, with help 
from Jason Gustafson.
1. shouldKeepMemberAlive(): when we are in the sync phase, we do want 
to expire members if we don't get any response
2. Try to avoid SendFailedException by cancelling the scheduled tasks 
and ensuring a metadata update before sending group leave requests.






[GitHub] kafka pull request: KAFKA-2737: Added single- and multi-consumer i...

2015-11-03 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/413

KAFKA-2737: Added single- and multi-consumer integration tests for 
round-robin assignment

Two tests:
1. One consumer subscribes to 2 topics, each with 2 partitions; includes 
adding and removing a topic.
2. Several consumers subscribe to 2 topics, several partitions each; 
includes adding one more consumer after the initial assignment is done and verified.
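
The round-robin assignment these tests validate can be sketched as follows. This is a minimal Python model of the strategy, not Kafka's RoundRobinAssignor implementation; the function name assign_round_robin is an assumption:

```python
# Illustrative sketch of round-robin assignment: sorted topic-partitions
# are dealt one at a time across the sorted consumer list.
from itertools import cycle

def assign_round_robin(consumers, topic_partitions):
    assignment = {c: [] for c in consumers}
    consumer_cycle = cycle(sorted(consumers))
    for tp in sorted(topic_partitions):
        assignment[next(consumer_cycle)].append(tp)
    return assignment

# Two topics with two partitions each, shared by two consumers:
tps = [("t1", 0), ("t1", 1), ("t2", 0), ("t2", 1)]
print(assign_round_robin(["c1", "c2"], tps))
# → {'c1': [('t1', 0), ('t2', 0)], 'c2': [('t1', 1), ('t2', 1)]}
```

Adding a consumer and re-running the function models the rebalance the second test performs: every partition moves to whichever consumer the cycle now deals it to.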

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-76

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/413.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #413


commit 6e3a74863b50162bed338e6719af0ddd13109268
Author: Anna Povzner 
Date:   2015-11-04T00:28:25Z

KAFKA-2737: Added single- and multi-consumer integration tests for 
round-robin assignment






[GitHub] kafka pull request: KAFKA-2714: Added integration tests for except...

2015-10-30 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/393

KAFKA-2714: Added integration tests for exceptional cases in fetching



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-84

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #393


commit 4f175d813270ae2943dd4466c51bedbf11e819ea
Author: Anna Povzner 
Date:   2015-10-29T22:21:01Z

MINOR: Added integration tests for exceptional cases in fetching

commit 6aa6aaf4b4f3ebb4f101fc0a5088195d52a2a0ba
Author: Anna Povzner 
Date:   2015-10-30T00:00:50Z

MINOR: Checking correct values in exceptions thrown in integration tests 
for exceptional cases in fetching






[GitHub] kafka pull request: KAFKA-2714: Added integration tests for except...

2015-10-30 Thread apovzner
Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/384




[GitHub] kafka pull request: MINOR: Added integration tests for exceptional...

2015-10-29 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/384

MINOR: Added integration tests for exceptional cases in fetching

1. When the reset policy is NONE, verify that NoOffsetForPartitionException is 
thrown if no initial position is set. Verify that OffsetOutOfRangeException is 
thrown if you seek out of range.
2. Verify RecordTooLargeException is thrown if a message is too large for 
the configured fetch size.
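
The first pair of cases can be sketched as the following decision. This is an illustrative Python model only, not the Kafka consumer's fetcher code; the exception classes mirror Kafka's names but the function resolve_fetch_offset is an assumption:

```python
# Illustrative sketch: where to fetch from, given the consumer's position
# and the broker's valid offset range [log_start, log_end].
class NoOffsetForPartitionException(Exception): pass
class OffsetOutOfRangeException(Exception): pass

def resolve_fetch_offset(position, log_start, log_end, reset_policy="none"):
    if position is None:
        if reset_policy == "earliest":
            return log_start
        if reset_policy == "latest":
            return log_end
        # reset policy NONE and no initial position set (test case 1a):
        raise NoOffsetForPartitionException("no initial position, no reset policy")
    if not (log_start <= position <= log_end):
        if reset_policy == "none":
            # e.g. the consumer seeked out of range (test case 1b):
            raise OffsetOutOfRangeException("offset %d out of range" % position)
        return log_start if reset_policy == "earliest" else log_end
    return position

print(resolve_fetch_offset(5, 0, 10))  # valid position → 5
```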

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka cpkafka-84

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #384


commit 4f175d813270ae2943dd4466c51bedbf11e819ea
Author: Anna Povzner 
Date:   2015-10-29T22:21:01Z

MINOR: Added integration tests for exceptional cases in fetching



