Jenkins build is back to normal : kafka-trunk-jdk7 #698

2015-10-19 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #38

2015-10-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fixed README examples on running specific tests.

--
[...truncated 4250 lines...]

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testSinkTasks PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testTaskClass PASSED

org.apache.kafka.copycat.file.FileStreamSinkTaskTest > testPutFlush PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testNormalLifecycle 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testMissingTopic PASSED
:copycat:json:checkstyleMain
:copycat:json:compileTestJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:json:processTestResources UP-TO-DATE
:copycat:json:testClasses
:copycat:json:checkstyleTest
:copycat:json:test

org.apache.kafka.copycat.json.JsonConverterTest > longToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToJsonConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
nullSchemaAndMapNonStringKeysToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe 

[jira] [Commented] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-10-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963229#comment-14963229
 ] 

Ismael Juma commented on KAFKA-2472:


Thank you.

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up fix on KAFKA-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)





[jira] [Commented] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-10-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963188#comment-14963188
 ] 

Ismael Juma commented on KAFKA-2472:


[~harsha_ch], do you mind if I take this?

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.9.0.0
>
>
> This is a follow-up fix on KAFKA-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)





[jira] [Updated] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-10-19 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2472:
--
Assignee: Ismael Juma  (was: Sriharsha Chintalapani)

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up fix on KAFKA-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)





Build failed in Jenkins: kafka_system_tests #112

2015-10-19 Thread ewen
See 

--
[...truncated 1703 lines...]


test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.num_producers=3.acks=1

status: PASS

run time:   1 minute 48.257 seconds

{"records_per_sec": 159183.909731, "mb_per_sec": 15.18}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=10

status: PASS

run time:   2 minutes 18.107 seconds

{"records_per_sec": 219863.250663, "mb_per_sec": 2.1}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=10

status: PASS

run time:   2 minutes 59.457 seconds

{"records_per_sec": 145806.413766, "mb_per_sec": 1.39}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100

status: PASS

run time:   1 minute 36.944 seconds

{"records_per_sec": 70032.715888, "mb_per_sec": 6.68}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=100

status: PASS

run time:   1 minute 58.065 seconds

{"records_per_sec": 43936.657064, "mb_per_sec": 4.19}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=1000

status: PASS

run time:   1 minute 37.144 seconds

{"records_per_sec": 8172.004384, "mb_per_sec": 7.79}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=1000

status: PASS

run time:   1 minute 51.565 seconds

{"records_per_sec": 5217.984605, "mb_per_sec": 4.98}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=1

status: PASS

run time:   1 minute 29.444 seconds

{"records_per_sec": 1035.251466, "mb_per_sec": 9.87}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=1

status: PASS

run time:   1 minute 45.678 seconds

{"records_per_sec": 614.739831, "mb_per_sec": 5.86}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=10

status: PASS

run time:   1 minute 24.620 seconds

{"records_per_sec": 260.937196, "mb_per_sec": 24.88}



test_id:
2015-10-19--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=10

status: PASS

run time:   1 minute 39.139 seconds

{"records_per_sec": 90.230619, "mb_per_sec": 8.61}



test_id:
2015-10-19--001.kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_lifecycle.security_protocol=SSL.new_consumer=True

status: PASS

run time:   1 minute 3.005 seconds



test_id:
2015-10-19--001.kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_lifecycle.security_protocol=PLAINTEXT.new_consumer=False

status: PASS

run time:   55.723 seconds



test_id:

Write access to Confluence wiki

2015-10-19 Thread Flavio Junqueira
Could anyone here please grant me write access to the Confluence wiki? I need 
to work on a KIP.

Thanks!
-Flavio

[jira] [Commented] (KAFKA-2618) Disable SSL renegotiation for 0.9.0.0

2015-10-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963457#comment-14963457
 ] 

Ismael Juma commented on KAFKA-2618:


I have a branch with this, will submit it after the SASL branch is merged (as I 
built it on top of that one).

> Disable SSL renegotiation for 0.9.0.0
> -
>
> Key: KAFKA-2618
> URL: https://issues.apache.org/jira/browse/KAFKA-2618
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> As discussed in KAFKA-2609, we don't have enough tests for SSL renegotiation 
> to be confident that it works well. In addition, neither the clients nor the 
> server makes use of renegotiation at this point.
> For 0.9.0.0, we should disable renegotiation.
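
One way to implement the disabling (an assumption on my part, not necessarily
the approach taken in the branch mentioned above): once the initial handshake
has completed, treat any further handshake activity on the SSLEngine as an
error. A minimal Java sketch:

    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult.HandshakeStatus;
    import javax.net.ssl.SSLHandshakeException;

    final class RenegotiationGuard {
        // Call on every read/write after the initial handshake has finished.
        // Fails the connection if the engine ever re-enters a handshaking
        // state, i.e. if the peer attempts to renegotiate.
        static void ensureNotRenegotiating(SSLEngine engine, boolean handshakeComplete)
                throws SSLHandshakeException {
            if (handshakeComplete
                    && engine.getHandshakeStatus() != HandshakeStatus.NOT_HANDSHAKING) {
                throw new SSLHandshakeException("SSL renegotiation is disabled");
            }
        }
    }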





[jira] [Work started] (KAFKA-2618) Disable SSL renegotiation for 0.9.0.0

2015-10-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2618 started by Ismael Juma.
--
> Disable SSL renegotiation for 0.9.0.0
> -
>
> Key: KAFKA-2618
> URL: https://issues.apache.org/jira/browse/KAFKA-2618
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> As discussed in KAFKA-2609, we don't have enough tests for SSL renegotiation 
> to be confident that it works well. In addition, neither the clients or the 
> server make use of renegotiation at this point.
> For 0.9.0.0, we should disable renegotiation.





[jira] [Commented] (KAFKA-2146) adding partition did not find the correct startIndex

2015-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963423#comment-14963423
 ] 

ASF GitHub Bot commented on KAFKA-2146:
---

GitHub user shangan opened a pull request:

https://github.com/apache/kafka/pull/329

KAFKA-2146. adding partition did not find the correct startIndex

TopicCommand provides a tool to add partitions to existing topics. It tries 
to find the startIndex from the existing partitions. There's a minor flaw in 
this process: it uses the first partition fetched from zookeeper as the start 
partition, and the first replica id in this partition as the startIndex.
First, the first partition fetched from zookeeper is not necessarily the 
start partition. Since partition ids begin from zero, we should use the 
partition with id zero as the start partition.
Second, broker ids do not necessarily begin from 0, so the startIndex is not 
necessarily the first replica id in the start partition.
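
A minimal sketch of the corrected lookup (illustrative only, not the actual
patch; the class, method, and parameter names are made up), in Java:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;

    final class StartIndexExample {
        // existingAssignment maps partition id -> replica broker ids as read
        // from ZooKeeper; brokerIds are the ids of the live brokers.
        static int startIndexFor(Map<Integer, List<Integer>> existingAssignment,
                                 List<Integer> brokerIds) {
            // Partition ids begin at zero, so anchor on partition 0 rather
            // than whichever partition ZooKeeper happens to return first.
            List<Integer> startReplicas = existingAssignment.get(0);
            // Broker ids need not begin at 0, so translate the first replica's
            // broker id into its position in the sorted broker list.
            List<Integer> sorted = new ArrayList<Integer>(brokerIds);
            Collections.sort(sorted);
            return sorted.indexOf(startReplicas.get(0));
        }
    }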

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shangan/kafka trunk-KAFKA-2146

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/329.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #329


commit 8937cf75240bf48c1b70c1c2be461c5577dba3ac
Author: chenshangan 
Date:   2015-10-19T13:06:24Z

KAFKA-2146. adding partition did not find the correct startIndex




> adding partition did not find the correct startIndex 
> -
>
> Key: KAFKA-2146
> URL: https://issues.apache.org/jira/browse/KAFKA-2146
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.0
>Reporter: chenshangan
>Assignee: chenshangan
>Priority: Minor
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2146.2.patch, KAFKA-2146.patch
>
>
> TopicCommand provides a tool to add partitions to existing topics. It tries 
> to find the startIndex from the existing partitions. There's a minor flaw in 
> this process: it uses the first partition fetched from zookeeper as the 
> start partition, and the first replica id in this partition as the 
> startIndex.
> First, the first partition fetched from zookeeper is not necessarily the 
> start partition. Since partition ids begin from zero, we should use the 
> partition with id zero as the start partition.
> Second, broker ids do not necessarily begin from 0, so the startIndex is not 
> necessarily the first replica id in the start partition. 
>   





[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963467#comment-14963467
 ] 

Grant Henke commented on KAFKA-2580:


I have some time to work on this, but would like to see if we can get agreement 
on the approach. Currently we have two high-level options:
   * LRU cache expiration
   * Access-time-based expiration

[~toddpalino],[~vinothchandar],[~guozhang] thoughts?

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non active (not written to or read from) files. (in fact, there are some 
> threads going back to 2013 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large state use case).  





[jira] [Assigned] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2580:
--

Assignee: Grant Henke

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non active (not written to or read from) files. (in fact, there are some 
> threads going back to 2013 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large state use case).  





[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963715#comment-14963715
 ] 

Gwen Shapira commented on KAFKA-2502:
-

Just note that in github trunk there is now a directory called "docs" 
(https://github.com/apache/kafka/tree/trunk/docs); the content of this 
directory will be ported to kafka-site as the 0.9 docs when we release. We will 
also release a doc tarball for download. I recommend contributing the changes 
as a patch on trunk and not to the website directly.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136





[GitHub] kafka pull request: KAFKA-2671: Enable starting Kafka server with ...

2015-10-19 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/330

KAFKA-2671: Enable starting Kafka server with a Properties object



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2671

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/330.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #330


commit 3924b1da522f257bc45d51b98af247208dfc210c
Author: Ashish Singh 
Date:   2015-10-19T20:21:08Z

KAFKA-2671: Enable starting Kafka server with a Properties object






SendFailedException in new consumer when run with SSL

2015-10-19 Thread Rajini Sivaram
When running new consumer with SSL, debug logs show these exceptions every
time:



Thank you...

Regards,

Rajini


Re: SendFailedException in new consumer when run with SSL

2015-10-19 Thread Rajini Sivaram
Oops, pressed wrong button...

 When running new consumer with SSL, debug logs show these exceptions every
time:

[2015-10-19 20:57:43,389] DEBUG Fetch failed
(org.apache.kafka.clients.consumer.internals.Fetcher)
org.apache.kafka.clients.consumer.internals.SendFailedException


The exception occurs because the send is queued before the SSL handshake is
complete. A couple of questions:

   1. Is the exception harmless?
   2. Can it be avoided? It feels like an exception that exists to
   handle edge cases like buffer overflow rather than something in a normal
   code path.



Thank you...

Regards,

Rajini


Re: SendFailedException in new consumer when run with SSL

2015-10-19 Thread Ismael Juma
I haven't looked at the code, but please file an issue so that we can
investigate this. It looks like Fetcher is not using NetworkClient
correctly.

Ismael
On 19 Oct 2015 22:02, "Rajini Sivaram"  wrote:

> Oops, pressed wrong button...
>
>  When running new consumer with SSL, debug logs show these exceptions every
> time:
>
> [2015-10-19 20:57:43,389] DEBUG Fetch failed
> (org.apache.kafka.clients.consumer.internals.Fetcher)
> org.apache.kafka.clients.consumer.internals.SendFailedException
>
>
> The exception occurs  because send is queued before SSL handshake is
> complete. A couple of questions:
>
>1. Is the exception harmless?
>2. Can it be avoided since it feels like an exception that exists to
>handle edge cases like buffer overflow rather than something in a normal
>code path.
>
>
>
> Thank you...
>
> Regards,
>
> Rajini
>


Re: Write access to Confluence wiki

2015-10-19 Thread Jun Rao
Done.

Thanks,

Jun

On Mon, Oct 19, 2015 at 11:46 AM, Hitesh  wrote:

> can you please grant me access as well?
>
> Thanks.
> HItesh
>
> On Mon, Oct 19, 2015 at 7:26 AM, Jun Rao  wrote:
>
> > Just granted you the permission.
> >
> > Thanks,
> >
> > Jun
> >
> > On Mon, Oct 19, 2015 at 2:15 AM, Flavio Junqueira 
> wrote:
> >
> > > Could anyone here please grant me write access to the Confluence wiki,
> > > please? I need to work on a KIP.
> > >
> > > Thanks!
> > > -Flavio
> >
>


[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964017#comment-14964017
 ] 

Ashish K Singh commented on KAFKA-2671:
---

[~gwenshap], yes it is doable; however, one would have to start the metrics 
reporter, start the kafka server, add a shutdown hook, etc. If tomorrow Kafka 
decides to start some other services here as well, applications starting Kafka 
will have to take care of these things.

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must keep current behavior intact.





[jira] [Commented] (KAFKA-2656) Default SSL keystore and truststore config are unusable

2015-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964030#comment-14964030
 ] 

ASF GitHub Bot commented on KAFKA-2656:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/312


> Default SSL keystore and truststore config are unusable
> ---
>
> Key: KAFKA-2656
> URL: https://issues.apache.org/jira/browse/KAFKA-2656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> The default truststore for clients and the default keystore and truststore 
> for the Kafka server are set to files in /tmp, along with simplistic 
> passwords. Since no sample stores are packaged with Kafka anyway, there is no 
> value in hardcoded paths and passwords as defaults. 
> Moreover, these defaults prevent the use of the standard javax.net.ssl 
> properties, and they force truststores to be set in the Kafka configuration 
> even when certificates are signed by a trusted authority included in the Java 
> cacerts.
> The default keystores and truststores should be replaced with the JVM defaults.
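
For illustration, with the hardcoded defaults removed, a client that needs a
non-default truststore can rely on the standard JVM system properties instead
of Kafka-specific configs (the path and password below are placeholders):

    -Djavax.net.ssl.trustStore=/path/to/client.truststore.jks
    -Djavax.net.ssl.trustStorePassword=changeit

and clients whose broker certificates are signed by a CA present in the bundled
Java cacerts need no truststore configuration at all.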





Kafka KIP meeting Oct 20 at 11:00am PST

2015-10-19 Thread Jun Rao
Hi, Everyone,

We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
attend but haven't received an invite, please let me know. The following is
the agenda.

Agenda:
1. KIP-38: Zookeeper authentication
2. KIP-37: Add namespaces in Kafka

Thanks,

Jun


[jira] [Created] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2671:
-

 Summary: Enable starting Kafka server with a Properties object
 Key: KAFKA-2671
 URL: https://issues.apache.org/jira/browse/KAFKA-2671
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Ashish K Singh
Assignee: Ashish K Singh


Kafka, as of now, can only be started with a properties file and override 
params. It makes life easier for management applications to be able to start 
Kafka with a properties object programmatically.

The changes required to enable this are minimal, just a tad bit of refactoring 
of kafka.Kafka. The changes must keep current behavior intact.





[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-19 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964009#comment-14964009
 ] 

Flavio Junqueira commented on KAFKA-2017:
-

[~guozhang]

bq. With this data size a 5 node ZK should be able to handle at least 10K / sec 
writes with latency around couple of ms on HDD, and on SSD latency should be 
much less

The figures are right. With SSDs, you should be able to get sub-millisecond 
latency.

bq. unless we have scenarios that each consumer forms a single group then it 
should be fine: in this case it would be the same as offset commits in ZK 
anyways.

I didn't quite get the comment about each consumer forming a single group. How 
does it impact the zookeeper traffic?

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to 
> a new coordinator without forcing all the consumers to rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.





[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963997#comment-14963997
 ] 

Gwen Shapira commented on KAFKA-2671:
-

You can start Kafka with a KafkaConfig object (I believe LinkedIn are doing 
this). Since you can create a KafkaConfig from a properties object, I'm not 
sure why this additional interface is needed.
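
For reference, a minimal sketch of that existing path (Java; the property
values are placeholders, and the exact config factory method may differ by
version):

    import java.util.Properties;
    import kafka.server.KafkaConfig;
    import kafka.server.KafkaServerStartable;

    public class StartBrokerFromProps {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("broker.id", "0");
            props.put("zookeeper.connect", "localhost:2181");
            props.put("log.dirs", "/tmp/kafka-logs");

            // Build the config from a Properties object, then start the broker.
            final KafkaServerStartable server =
                new KafkaServerStartable(KafkaConfig.fromProps(props));
            server.startup();
            Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
                public void run() { server.shutdown(); }
            }));
        }
    }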

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must keep current behavior intact.





[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Todd Palino (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964003#comment-14964003
 ] 

Todd Palino commented on KAFKA-2580:


It's about as graceful as an OOM, which is to say "not very". Essentially, it 
hits the limit and falls over and dies with an exception. We've run into it a 
bit with both leaking FDs from an implementation issue, and with runaway 
clients that don't do the right thing. In both situations, you are correct that 
you will generally end up seeing it as a cascading failure through the cluster.

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non active (not written to or read from) files. (in fact, there are some 
> threads going back to 2013 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large state use case).  





Build failed in Jenkins: kafka-trunk-jdk7 #699

2015-10-19 Thread Apache Jenkins Server
See 

Changes:

[junrao] MINOR: Reduce logging level for controller connection failures from

[junrao] KAFKA-2656; Remove hardcoded default key and truststores

--
[...truncated 322 lines...]
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala UP-TO-DATE
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:javadoc
:kafka-trunk-jdk7:core:javadoc
cache taskArtifacts.bin 
(
 is corrupt. Discarding.
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:281:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:282:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 14 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk7:core:scaladocJar
:kafka-trunk-jdk7:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:clients:javadoc
:kafka-trunk-jdk7:log4j-appender:compileJava
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes
:kafka-trunk-jdk7:log4j-appender:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^

[GitHub] kafka pull request: KAFKA-2669; Fix LogCleanerIntegrationTest

2015-10-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/327




[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Todd Palino (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963986#comment-14963986
 ] 

Todd Palino commented on KAFKA-2580:


I agree with [~jkreps] here, that having a high FD limit is not a bad thing. As 
[~jjkoshy] noted, we're already running at 400k internally (recently increased 
from 200k). Part of that is to handle growth, and part of that is to have a 
good bit of headroom if something starts to leak FDs so we have some time to 
address it before it kills the process (we alert at 50% utilization).

The LRU cache option is probably the best. You can set it to an arbitrarily 
high number (the best option here might be to cap it near, but below, your 
per-process limit) if you want to effectively disable it, and it would 
generally avoid the process of having to check and act on expiring the FDs in 
the timer option. I can see arguments for setting the default either high or 
low (and I consider 10k to be low). Regardless, as long as it's configurable 
and documented it will be fine.
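
To make the LRU option concrete, a toy sketch (not Kafka code; names are
illustrative) of an access-ordered cache that closes the least-recently-used
handle once a cap is exceeded:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.LinkedHashMap;
    import java.util.Map;

    class FileHandleCache extends LinkedHashMap<String, FileInputStream> {
        private final int maxOpenHandles;

        FileHandleCache(int maxOpenHandles) {
            super(16, 0.75f, true); // access order, so iteration order is LRU
            this.maxOpenHandles = maxOpenHandles;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<String, FileInputStream> eldest) {
            if (size() <= maxOpenHandles)
                return false;
            try {
                eldest.getValue().close(); // release the FD before evicting
            } catch (IOException ignored) {
                // best effort: the entry is evicted either way
            }
            return true;
        }
    }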

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non active (not written to or read from) files. (in fact, there are some 
> threads going back to 2013 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large state use case).  





[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963985#comment-14963985
 ] 

ASF GitHub Bot commented on KAFKA-2671:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/330

KAFKA-2671: Enable starting Kafka server with a Properties object



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2671

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/330.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #330


commit 3924b1da522f257bc45d51b98af247208dfc210c
Author: Ashish Singh 
Date:   2015-10-19T20:21:08Z

KAFKA-2671: Enable starting Kafka server with a Properties object




> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must keep current behavior intact.





[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963998#comment-14963998
 ] 

Grant Henke commented on KAFKA-2580:


If we decide not to implement this and recommend setting a high FD limit, how 
gracefully does Kafka handle hitting that limit today? Has anyone seen this 
happen in a production environment? If data is spread evenly across the 
cluster, I would suspect many brokers would hit this around the same time.

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non active (not written to or read from) files. (in fact, there are some 
> threads going back to 2013 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large state use case).  





[GitHub] kafka pull request: KAFKA-2656: Remove hardcoded default key and t...

2015-10-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/312




[jira] [Updated] (KAFKA-2656) Default SSL keystore and truststore config are unusable

2015-10-19 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2656:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 312
[https://github.com/apache/kafka/pull/312]

> Default SSL keystore and truststore config are unusable
> ---
>
> Key: KAFKA-2656
> URL: https://issues.apache.org/jira/browse/KAFKA-2656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> The default truststore for clients and the default keystore and truststore 
> for the Kafka server are set to files in /tmp, along with simplistic 
> passwords. Since no sample stores are packaged with Kafka anyway, there is no 
> value in hardcoded paths and passwords as defaults. 
> Moreover, these defaults prevent the use of the standard javax.net.ssl 
> properties, and they force truststores to be set in the Kafka configuration 
> even when certificates are signed by a trusted authority included in the Java 
> cacerts.
> The default keystores and truststores should be replaced with the JVM defaults.





[GitHub] kafka pull request: MINOR: Reduce logging level for controller con...

2015-10-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/280




Re: SendFailedException in new consumer when run with SSL

2015-10-19 Thread Jun Rao
I don't see the debug log. Apache email sometimes removes attachments. So,
perhaps you can file a jira and include the logs there.

Thanks,

Jun

On Mon, Oct 19, 2015 at 1:57 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> When running new consumer with SSL, debug logs show these exceptions every
> time:
>
>
>
> Thank you...
>
> Regards,
>
> Rajini
>


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963845#comment-14963845
 ] 

Ismael Juma commented on KAFKA-2502:


Agreed Gwen, instructions are here: 
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes

[~aauradkar], your proposal sounds good to me.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136





[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963909#comment-14963909
 ] 

Aditya Auradkar commented on KAFKA-2502:


Thanks Gwen and Ismael. I'll send a patch for review later this week.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136





Re: Write access to Confluence wiki

2015-10-19 Thread Hitesh
can you please grant me access as well?

Thanks.
HItesh

On Mon, Oct 19, 2015 at 7:26 AM, Jun Rao  wrote:

> Just granted you the permission.
>
> Thanks,
>
> Jun
>
> On Mon, Oct 19, 2015 at 2:15 AM, Flavio Junqueira  wrote:
>
> > Could anyone here please grant me write access to the Confluence wiki,
> > please? I need to work on a KIP.
> >
> > Thanks!
> > -Flavio
>


Re: Contributor request

2015-10-19 Thread Gwen Shapira
Added. Looking forward to your contributions :)

On Sun, Oct 18, 2015 at 7:23 AM, Jakub Nowak 
wrote:

> Hi,
>
> I am interested in being added to the contributor list for Apache Kafka so
> that I may assign myself to a newbie JIRA ticket.
>
> My JIRA nickname is sinus.
>
> Jakub Nowak
>


Re: SendFailedException in new consumer when run with SSL

2015-10-19 Thread Rajini Sivaram
Ismael,

Thank you, have opened https://issues.apache.org/jira/browse/KAFKA-2672.

Regards,

Rajini


On Mon, Oct 19, 2015 at 10:30 PM, Ismael Juma  wrote:

> I haven't looked at the code, but please file an issue so that we can
> investigate this. It looks like Fetcher is not using NetworkClient
> correctly.
>
> Ismael
> On 19 Oct 2015 22:02, "Rajini Sivaram" 
> wrote:
>
> > Oops, pressed wrong button...
> >
> >  When running new consumer with SSL, debug logs show these exceptions
> every
> > time:
> >
> > [2015-10-19 20:57:43,389] DEBUG Fetch failed
> > (org.apache.kafka.clients.consumer.internals.Fetcher)
> > org.apache.kafka.clients.consumer.internals.SendFailedException
> >
> >
> > The exception occurs  because send is queued before SSL handshake is
> > complete. A couple of questions:
> >
> >1. Is the exception harmless?
> >2. Can it be avoided since it feels like an exception that exists to
> >handle edge cases like buffer overflow rather than something in a
> normal
> >code path.
> >
> >
> >
> > Thank you...
> >
> > Regards,
> >
> > Rajini
> >
>


[jira] [Commented] (KAFKA-2669) Fix LogCleanerIntegrationTest

2015-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964118#comment-14964118
 ] 

ASF GitHub Bot commented on KAFKA-2669:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/327


> Fix LogCleanerIntegrationTest
> -
>
> Key: KAFKA-2669
> URL: https://issues.apache.org/jira/browse/KAFKA-2669
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 0.9.0.0
>
>
> LogCleanerIntegrationTest calls LogCleaner.awaitCleaned() to wait until the 
> cleaner has processed up to a given offset. However, the existing 
> awaitCleaned() implementation doesn't actually wait for this.
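
For illustration, the kind of bounded wait the test needs (a generic Java
sketch, not the actual fix):

    // Generic bounded wait illustrating what awaitCleaned is meant to do:
    // poll a condition until it holds or a deadline passes.
    final class WaitUtil {
        interface Condition { boolean holds(); }

        static void waitUntil(Condition condition, long timeoutMs)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (!condition.holds()) {
                if (System.currentTimeMillis() > deadline)
                    throw new AssertionError("condition not met within " + timeoutMs + " ms");
                Thread.sleep(100);
            }
        }
    }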





[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-19 Thread Todd Palino (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964228#comment-14964228
 ] 

Todd Palino commented on KAFKA-2017:


I think we definitely need to maintain the ability to get that type of 
information, whether it's within the protocol or via an admin endpoint. Being 
able to tell what consumers exist in a group, as well as what partitions each 
of them owns, is important information to have. And it should be available not 
just from the consumers, but from the coordinator itself. That way you can 
debug issues where they have gone out of sync, and you can build tools (such 
as Burrow) to report consumer status information independently.

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to 
> a new coordinator without forcing all the consumers to rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.





[GitHub] kafka pull request: MINOR: Capture stderr in ConsumerPerformanceSe...

2015-10-19 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/331

MINOR: Capture stderr in ConsumerPerformanceService.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
minor-capture-consumer-performance-stderr

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/331.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #331


commit 83df8c0f50135193470cbbb33eba61afe7e80d50
Author: Ewen Cheslack-Postava 
Date:   2015-10-19T23:59:17Z

MINOR: Capture stderr in ConsumerPerformanceService.






Multithreading in producer code.

2015-10-19 Thread Gaurav Sharma
Hi All,

I'm in the process of writing a producer which reads a binary file and decodes
it according to a predefined structure (message length followed by the
message). There are 7 different message types in the file, e.g. message1,
message2, ..., message7.
In the producer code I have created 7 topics, one for each message type. I
also created a partitioner class, which uses a round-robin technique to select
the partition based on their size. Below is the partitioner code.

/*/
public int partition(Object key, int a_numPartitions) {
    // Hypothetical completion of the truncated snippet: round-robin
    // selection, assuming `counter` is an AtomicInteger field on the
    // partitioner class.
    int partition = counter.getAndIncrement() % a_numPartitions;
    return partition;
}


[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964220#comment-14964220
 ] 

Guozhang Wang commented on KAFKA-2017:
--

What I mean is that in practice the number of consumer groups should be << the 
number of consumers, and hence the group change frequency should be << the 
consumer offset commit frequency, which makes it probably more suitable for ZK. 
We moved offset commits from ZK to Kafka because their write frequency is 
per-consumer and hence sometimes too high for ZK.

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to failover to 
> a new coordinator without forcing all the consumers rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.





[jira] [Created] (KAFKA-2673) Log JmxTool output to logger

2015-10-19 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-2673:
---

 Summary: Log JmxTool output to logger
 Key: KAFKA-2673
 URL: https://issues.apache.org/jira/browse/KAFKA-2673
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.8.2.1
Reporter: Eno Thereska
Assignee: Eno Thereska
Priority: Trivial
 Fix For: 0.8.1.2


Currently JmxTool outputs the data into a CSV file. It could be of value to 
have the data sent to a logger specified in a log4j configuration file.
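
A hypothetical log4j configuration for such an output (the logger name and
file path are assumptions, since the change is only proposed):

    log4j.logger.kafka.tools.JmxTool=INFO, jmxAppender
    log4j.appender.jmxAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.jmxAppender.File=/var/log/kafka/jmx-metrics.log
    log4j.appender.jmxAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.jmxAppender.layout.ConversionPattern=%d %m%n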





Build failed in Jenkins: kafka-trunk-jdk8 #40

2015-10-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2669; Fix LogCleaner.awaitCleaned for LogCleanerIntegrationTest

--
[...truncated 4250 lines...]

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testSinkTasks PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testTaskClass PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > testSourceTasks 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testSourceTasksStdin PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > testTaskClass 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED
:copycat:json:checkstyleMain
:copycat:json:compileTestJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:json:processTestResources UP-TO-DATE
:copycat:json:testClasses
:copycat:json:checkstyleTest
:copycat:json:test

org.apache.kafka.copycat.json.JsonConverterTest > longToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToJsonConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
nullSchemaAndMapNonStringKeysToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some 

[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964225#comment-14964225
 ] 

Guozhang Wang commented on KAFKA-2017:
--

Another point worth considering is whether, for operations, we want to add the 
ability in admin tools to query consumer assignments in addition to consumer 
group metadata. Today the coordinator has to remember both of them in memory to 
handle consumer requests, but we can probably clean that up to reduce the 
footprint after the rebalance has fully settled. However, if we want to add, 
for example, a consumer group metadata request that returns the group 
membership info as well as the assignments for admin tools, then we may need to 
keep them in memory and, moving forward, in persistent storage. And if this 
data becomes large enough, then Kafka might be a better place for it than ZK.

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to 
> a new coordinator without forcing all the consumers to rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.





[GitHub] kafka pull request: MINOR: Capture stderr in ConsumerPerformanceSe...

2015-10-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/331


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964297#comment-14964297
 ] 

Ashish K Singh commented on KAFKA-2671:
---

Correct me if I am wrong, but below are the options one has to start a Kafka 
broker programmatically.

1. Pass a config file and overrides.
2. Start Kafka with a KafkaConfig, as suggested by Gwen.

Now, consider a scenario where an application has the required Kafka properties 
available in a Properties object. These are the ways the app can start Kafka.

1. Create a conf file, and then start Kafka passing the conf file. This seems 
like overkill. Passing a conf file is ideal when starting from the CLI, but I 
am not sure that is true when one wants to start programmatically.
2. Create a list of overrides and pass that as an argument while starting 
Kafka. This works, but it is a hack-ish solution.
3. Create a KafkaConfig and use that to create Kafka. The app will then have to 
take care of starting the other entities that {{kafka.Kafka}} starts right now, 
like the metrics reporter, adding a shutdown hook, etc. If tomorrow Kafka 
decides to start some other services here as well, any application starting 
Kafka will also have to take care of those.

The interface suggested in the PR will enable starting Kafka by just passing a 
Properties object. Unlike the 3rd option above, this will be future proof, and 
the app won't have to take care of the bunch of other things that 
{{kafka.Kafka}} does right now. I agree that this is not a very hard 
requirement and one can get by with the options suggested above. It's more for 
programmer convenience than anything else; it makes Kafka easy to embed in 
other apps.
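
To make the trade-off concrete, here is a minimal, hypothetical Java sketch of 
option 3, assuming the kafka.server.KafkaServerStartable wrapper that 
kafka.Kafka itself uses in this era of the code base (the property values are 
placeholders):

{code}
import java.util.Properties;

import kafka.server.KafkaConfig;
import kafka.server.KafkaServerStartable;

public class EmbeddedBroker {
    public static void main(String[] args) {
        // Placeholder broker settings; a real app would supply its own.
        Properties props = new Properties();
        props.put("broker.id", "0");
        props.put("zookeeper.connect", "localhost:2181");
        props.put("log.dirs", "/tmp/kafka-logs");

        // Option 3: the app builds the config and server itself...
        final KafkaServerStartable server =
                new KafkaServerStartable(KafkaConfig.fromProps(props));

        // ...and must replicate what kafka.Kafka normally does, e.g. the
        // shutdown hook. Anything added to kafka.Kafka later would not be
        // picked up automatically.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                server.shutdown();
            }
        });

        server.startup();
        server.awaitShutdown();
    }
}
{code}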

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must leave current behavior intact.





[jira] [Created] (KAFKA-2672) SendFailedException when new consumer is run with SSL

2015-10-19 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-2672:
-

 Summary: SendFailedException when new consumer is run with SSL
 Key: KAFKA-2672
 URL: https://issues.apache.org/jira/browse/KAFKA-2672
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Rajini Sivaram
Assignee: Neha Narkhede
 Fix For: 0.9.0.0



When running new consumer with SSL, debug logs show these exceptions every time:

{quote}
[2015-10-19 20:57:43,389] DEBUG Fetch failed 
(org.apache.kafka.clients.consumer.internals.Fetcher)
org.apache.kafka.clients.consumer.internals.SendFailedException 
{quote}

The exception occurs because the send is queued before the SSL handshake is 
complete. I am not sure if the exception is harmless, but it would be good to 
avoid it either way, since it feels like an exception that exists to handle 
edge cases like buffer overflow rather than something in a normal code path.





[jira] [Resolved] (KAFKA-2669) Fix LogCleanerIntegrationTest

2015-10-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2669.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 327
[https://github.com/apache/kafka/pull/327]

> Fix LogCleanerIntegrationTest
> -
>
> Key: KAFKA-2669
> URL: https://issues.apache.org/jira/browse/KAFKA-2669
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 0.9.0.0
>
>
> LogCleanerIntegrationTest calls LogCleaner.awaitCleaned() to wait until the 
> cleaner has processed up to a given offset. However, the existing 
> awaitCleaned() implementation doesn't actually wait for this.





Build failed in Jenkins: kafka-trunk-jdk8 #39

2015-10-19 Thread Apache Jenkins Server
See 

Changes:

[junrao] MINOR: Reduce logging level for controller connection failures from

[junrao] KAFKA-2656; Remove hardcoded default key and truststores

--
[...truncated 6367 lines...]
org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testSourceTasksStdin PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > testTaskClass 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testSinkTasks PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testTaskClass PASSED

org.apache.kafka.copycat.file.FileStreamSinkTaskTest > testPutFlush PASSED
:copycat:json:checkstyleMain
:copycat:json:compileTestJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:json:processTestResources UP-TO-DATE
:copycat:json:testClasses
:copycat:json:checkstyleTest
:copycat:json:test

[...identical org.apache.kafka.copycat.json.JsonConverterTest output as in build #40 above; all tests PASSED...]
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJavawarning: [options] bootstrap class path 

Jenkins build is back to normal : kafka-trunk-jdk7 #700

2015-10-19 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964255#comment-14964255
 ] 

Guozhang Wang commented on KAFKA-2671:
--

[~singhashish] Like [~gwenshap], I am still not clear about the motivation of 
this patch; could you elaborate a bit more?

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must leave current behavior intact.





[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-19 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964302#comment-14964302
 ] 

Jason Gustafson commented on KAFKA-2017:


I'm hoping that we can modify the ConsumerMetadataRequest to return group 
metadata (including member assignments) when it is received by the group's 
coordinator. Admin tools would have to send one metadata request to discover 
the coordinator and one to get the associated metadata, but I think that would 
address the major operational concern. Perhaps if we have that, then we can 
adopt [~junrao]'s suggestion and punt on persistence for now. However, 
supporting this feature going forward would probably push us toward Kafka 
persistence for group metadata, unless we're willing to store full assignments 
(which can be quite large) in Zookeeper. Any thoughts?

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to 
> a new coordinator without forcing all the consumers to rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.





[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964309#comment-14964309
 ] 

Gwen Shapira commented on KAFKA-2671:
-

What's the use-case for starting the Kafka server as a singleton inside another 
application? 

The reason I'm asking is that I've actually seen this requirement before, and 
it seemed like a rather misguided attempt to use Kafka as a local queue for a 
webserver (using a local Kafka rather than a producer to a remote Kafka). 
I'm rather concerned about enabling such use-cases, as they can lead to more 
problems down the road.

I hope you have a better use-case.

Also, are you sure you want to instantiate Kafka with a start() method on a 
singleton? This is kind of non-standard.

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must leave current behavior intact.





[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964360#comment-14964360
 ] 

Gwen Shapira commented on KAFKA-2671:
-

On second look, metrics are currently handled in both KafkaServer and the Kafka 
singleton. 
This looks wrong. 

Perhaps you can figure out why and see if we can safely move all metrics 
handling inside KafkaServer?

After that, your wrapper will just need to handle the shutdown hook and call 
KafkaServer, which sounds reasonable to me. 


> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must leave current behavior intact.





Build failed in Jenkins: kafka-trunk-jdk8 #41

2015-10-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Capture stderr in ConsumerPerformanceService.

--
[...truncated 4250 lines...]

org.apache.kafka.copycat.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testSinkTasks PASSED

org.apache.kafka.copycat.file.FileStreamSinkConnectorTest > testTaskClass PASSED

org.apache.kafka.copycat.file.FileStreamSinkTaskTest > testPutFlush PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testNormalLifecycle 
PASSED

org.apache.kafka.copycat.file.FileStreamSourceTaskTest > testMissingTopic PASSED
:copycat:json:checkstyleMain
:copycat:json:compileTestJavawarning: [options] bootstrap class path not set in 
conjunction with -source 1.7
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:json:processTestResources UP-TO-DATE
:copycat:json:testClasses
:copycat:json:checkstyleTest
:copycat:json:test

[...identical org.apache.kafka.copycat.json.JsonConverterTest output as in build #40 above; all tests PASSED...]
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe 

[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964382#comment-14964382
 ] 

Jay Kreps commented on KAFKA-2580:
--

Yeah, as [~toddpalino] says, it is totally not graceful--it's a hard limit like 
disk space or memory. We do have per-IP connection limits in place now, though, 
so if you use those, the cluster overall should not be impacted by client 
leaks; you would have to actually have more clients than your limit can 
support.

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non-active (not written to or read from) files. (In fact, there are some 
> threads going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up the ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large-state use case).  





[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964394#comment-14964394
 ] 

Jay Kreps commented on KAFKA-2671:
--

The use case for embedding Kafka is to make it work with existing config, 
monitoring, and deployment tools that require some integration work (logic 
specific to that framework), and also to support integration testing. Rather 
than trying to provide plugin hooks for this, which are usually kind of limited 
and hacky, you can just have people create their own main method, and that way 
they can add whatever logic they want on startup or shutdown. Plus, just as a 
matter of cleanliness, it's nice to have a clean path to do programmatically 
whatever we're doing in the command line tool. We've tried to support this 
usage pattern since early on, so if it has gotten painful it'd be nice to clean 
it up a bit.

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must leave current behavior intact.





Build failed in Jenkins: kafka-trunk-jdk8 #42

2015-10-19 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: check logic of KAFKA-2515 should be on buffer.limit()

--
[...truncated 3176 lines...]

org.apache.kafka.clients.consumer.internals.CoordinatorTest > testRefreshOffset 
PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testCommitOffsetSyncCoordinatorNotAvailable PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testRefreshOffsetWithNoFetchableOffsets PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testCommitOffsetAsyncWithDefaultCallback PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testCommitOffsetAsyncNotCoordinator PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testUnknownPartitionAssignmentStrategy PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testUnknownConsumerId PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testResetGeneration PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testRefreshOffsetNotCoordinatorForConsumer PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testCommitOffsetMetadataTooLarge PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testCommitOffsetAsyncDisconnected PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testCoordinatorNotAvailable PASSED

org.apache.kafka.clients.consumer.internals.CoordinatorTest > 
testNotCoordinator PASSED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > 
multiSend PASSED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > wakeup 
PASSED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > 
schedule PASSED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > send 
PASSED

org.apache.kafka.clients.producer.ProducerRecordTest > testEqualsAndHashCode 
PASSED

org.apache.kafka.clients.producer.MockProducerTest > testManualCompletion PASSED

org.apache.kafka.clients.producer.MockProducerTest > testAutoCompleteMock PASSED

org.apache.kafka.clients.producer.MockProducerTest > testPartitioner PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testSerializerClose PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorFailureCloseResource PASSED

org.apache.kafka.clients.producer.RecordSendTest > testTimeout PASSED

org.apache.kafka.clients.producer.RecordSendTest > testError PASSED

org.apache.kafka.clients.producer.RecordSendTest > testBlocking PASSED

org.apache.kafka.clients.producer.internals.DefaultPartitionerTest > 
testKeyPartitionIsStable PASSED

org.apache.kafka.clients.producer.internals.DefaultPartitionerTest > 
testRoundRobinWithUnavailablePartitions PASSED

org.apache.kafka.clients.producer.internals.SenderTest > testQuotaMetrics PASSED

org.apache.kafka.clients.producer.internals.SenderTest > testRetries PASSED

org.apache.kafka.clients.producer.internals.SenderTest > testSimple PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > 
testStressfulSituation PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > testBlockTimeout 
PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > 
testCantAllocateMoreMemoryThanWeHave PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > testSimple PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > 
testDelayedAllocation PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testRetryBackoff PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testNextReadyCheckDelay PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testStressfulSituation PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testFlush 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testFull 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testAbortIncompleteBatches PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testExpiredBatches PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testLinger 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testPartialDrain PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testAppendLarge PASSED

org.apache.kafka.common.serialization.SerializationTest > testStringSerializer 
PASSED

org.apache.kafka.common.serialization.SerializationTest > testIntegerSerializer 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testBasicTypes PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefault PASSED


Multithreading in the producer code.

2015-10-19 Thread Gaurav Sharma
Hi All,

Sorry for the previous incomplete mail.

I'm new to Kafka and in the process of writing a producer. Just to give you
the context: my producer reads a binary file, decodes it according to a
predefined structure (message length followed by the message), and publishes
each decoded message, based on its type, to a topic. For instance, consider
that there are 7 different message types in the file, for example message1,
message2, ..., message7, and I have created 7 topics, one for each message
type.
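
For clarity, the decode loop described above would look roughly like the
sketch below (the 4-byte length header and all class/method names here are
assumptions; the mail does not specify the header width):

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

// Sketch of length-prefixed framing: each record is a length header
// followed by that many payload bytes.
public class LengthPrefixedReader {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            while (true) {
                int length;
                try {
                    length = in.readInt();   // message length header (assumed 4 bytes)
                } catch (EOFException eof) {
                    break;                   // clean end of file
                }
                byte[] payload = new byte[length];
                in.readFully(payload);       // the message body
                // dispatch(payload): pick the topic from the message type
            }
        }
    }
}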

I'm using only one producer object to send messages to the different topics.

I also created a partition class, which uses a round-robin technique to
select the partition based on their size. Below is the partition code:

/**Partition Code***/

// Assumes a shared field such as:
//   private final AtomicInteger counter = new AtomicInteger(0);
public int partition(Object key, int a_numPartitions) {
    // Round-robin: ignore the key and cycle through the partitions.
    int partitionId = counter.incrementAndGet() % a_numPartitions;
    // Reset periodically so the counter never overflows into negative values
    // (a negative counter would yield a negative partition id). This
    // check-then-set is racy under concurrent sends, but an occasionally
    // skipped reset is harmless here.
    if (counter.get() > 65536) {
        counter.set(0);
    }
    return partitionId;
}

/ends**/

Right now I'm working with three brokers and a replication factor of 2.
I'm getting a throughput of 4k messages per second. Each message is around
86 bytes.

My question to you is: how can I increase the throughput at the producer
end? Do I need to create multiple producer objects? Will multithreading
help? If yes, what's the best way of doing it? What do you suggest to
improve the performance?

Any help would be highly appreciated.


Thanks,
Gaurav Sharma


[GitHub] kafka pull request: HOTFIX: check logic of KAFKA-2515 should be on...

2015-10-19 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/332

HOTFIX: check logic of KAFKA-2515 should be on buffer.limit()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2515-hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/332.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #332


commit 76c4bde6e3e7c0d8e816f5d4393d231a91f34185
Author: Guozhang Wang 
Date:   2015-10-20T01:27:38Z

KAFKA-2515.hf






[jira] [Commented] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964358#comment-14964358
 ] 

ASF GitHub Bot commented on KAFKA-2515:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/332

HOTFIX: check logic of KAFKA-2515 should be on buffer.limit()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2515-hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/332.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #332


commit 76c4bde6e3e7c0d8e816f5d4393d231a91f34185
Author: Guozhang Wang 
Date:   2015-10-20T01:27:38Z

KAFKA-2515.hf




> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.





[GitHub] kafka pull request: HOTFIX: check logic of KAFKA-2515 should be on...

2015-10-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/332




[jira] [Commented] (KAFKA-2515) handle oversized messages properly in new consumer

2015-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964407#comment-14964407
 ] 

ASF GitHub Bot commented on KAFKA-2515:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/332


> handle oversized messages properly in new consumer
> --
>
> Key: KAFKA-2515
> URL: https://issues.apache.org/jira/browse/KAFKA-2515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When there is an oversized message in the broker, it seems that the new 
> consumer just silently gets stuck. We should at least log an error when this 
> happens.





[jira] [Commented] (KAFKA-2672) SendFailedException when new consumer is run with SSL

2015-10-19 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964556#comment-14964556
 ] 

Ismael Juma commented on KAFKA-2672:


`poll` in `ConsumerNetworkClient`:

{code}
private void poll(long timeout, long now) {
    // send all the requests we can send now
    pollUnsentRequests(now);

    // ensure we don't poll any longer than the deadline for
    // the next scheduled task
    timeout = Math.min(timeout, delayedTasks.nextTimeout(now));
    clientPoll(timeout, now);

    // execute scheduled tasks
    now = time.milliseconds();
    delayedTasks.poll(now);

    // try again to send requests since buffer space may have been
    // cleared or a connect finished in the poll
    pollUnsentRequests(now);

    // fail all requests that couldn't be sent
    clearUnsentRequests(now);
}
{code}

`clearUnsentRequests` raises `SendFailedException` for all unsent requests, 
which can happen if the SSL handshake is still in progress.

[~hachikuji], thoughts?

> SendFailedException when new consumer is run with SSL
> -
>
> Key: KAFKA-2672
> URL: https://issues.apache.org/jira/browse/KAFKA-2672
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Rajini Sivaram
>Assignee: Neha Narkhede
> Fix For: 0.9.0.0
>
>
> When running the new consumer with SSL, debug logs show these exceptions every 
> time:
> {quote}
> [2015-10-19 20:57:43,389] DEBUG Fetch failed 
> (org.apache.kafka.clients.consumer.internals.Fetcher)
> org.apache.kafka.clients.consumer.internals.SendFailedException 
> {quote}
> The exception occurs because the send is queued before the SSL handshake is 
> complete. I am not sure if the exception is harmless, but it would be good to 
> avoid it either way, since it feels like an exception that exists to handle 
> edge cases like buffer overflow rather than something in a normal code path.





[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964345#comment-14964345
 ] 

Ashish K Singh commented on KAFKA-2671:
---

bq. What's the use-case for starting the Kafka server as a singleton inside 
another application?

I am trying to wrap kafka.Kafka in an app that takes care of config management 
in a secure way. The goal of the app is not to use Kafka, but just to start it. 
To be honest, it's a simple wrapper :).

bq. The reason I'm asking is that I've actually seen this requirement before, 
and it seemed like a rather misguided attempt to use Kafka as a local queue for 
a webserver (using a local Kafka rather than a producer to a remote Kafka). 
I'm rather concerned about enabling such use-cases, as they can lead to more 
problems down the road.

Misguided attempts are still doable with the existing options, right?

bq. Also, are you sure you want to instantiate Kafka with a start() method on a 
singleton? This is kind of non-standard.

That is true. Once I get a go-ahead on the idea, I can handle that in the PR by 
extracting it to a separate class.

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must leave current behavior intact.





[jira] [Commented] (KAFKA-2671) Enable starting Kafka server with a Properties object

2015-10-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14964353#comment-14964353
 ] 

Gwen Shapira commented on KAFKA-2671:
-

"takes care of config management in a secure way" 

if there's a security issue with the way Kafka currently manages its 
configuration, please speak up :)
Is this a work-around to loading SSL configuration from an app?

> Enable starting Kafka server with a Properties object
> -
>
> Key: KAFKA-2671
> URL: https://issues.apache.org/jira/browse/KAFKA-2671
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Kafka, as of now, can only be started with a properties file and override 
> params. It makes life easier for management applications to be able to start 
> Kafka with a properties object programmatically.
> The changes required to enable this are minimal, just a tad bit of 
> refactoring of kafka.Kafka. The changes must leave current behavior intact.





[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-19 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963605#comment-14963605
 ] 

Aditya Auradkar commented on KAFKA-2502:


[~ijuma] - I assume I need to submit changes to the 0.9 site here: 
http://kafka.apache.org/documentation.html

I'll add the following changes:
1. Add the newly introduced configs to the "Configuration" section.
2. Add a section on quota design to the "Design" section.
3. Add a piece on setting quotas dynamically via ConfigCommand in "Basic Kafka 
Operations" (see the sketch after this list).
4. In "Monitoring", add suggested metrics to monitor.

Sound ok?
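
For item 3, the dynamic quota change would presumably look something like the 
ConfigCommand invocation below (a sketch; the exact flags and quota property 
names may differ in the final 0.9 docs):

{code}
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
  --entity-type clients --entity-name clientA
{code}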

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136





[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-19 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963620#comment-14963620
 ] 

Jason Gustafson commented on KAFKA-2017:


[~junrao] I'm not sure that Guozhang's proposal would require a rebalance on 
failover (at least not usually). The new coordinator reads the consumer ids and 
generation from zookeeper and the consumers can continue consuming with the 
current assignment. There is an edge case between the time that metadata is 
written to zookeeper and the time that all members have received the new 
assignment where a coordinator failover could require a rebalance, but this 
should be rare.

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to 
> a new coordinator without forcing all the consumers to rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.





[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963632#comment-14963632
 ] 

Jay Kreps commented on KAFKA-2580:
--

10TB of space with 1GB segment files means about 10k FDs (though probably a bit 
more since the last segment would be, on average, only 512M). A file descriptor 
is pretty cheap and the perf seems pretty reasonable even with a lot of them. 
So just keeping the files open should not be a huge blocker--changing your FD 
max isn't a bad thing. So let's only do this if we can do it in a way the code 
gets better and cleaner.

If we do do it I really think we have to provide a hard bound on the total 
number of FDs. I agree that it could be a bit simpler and more efficient to 
just have a timeout after which FDs are closed, but since you have to set a 
hard limit on FDs this doesn't quite solve the problem--you still have to model 
which timeout will keep you under that limit. But if you do that you might as 
well just model the total FD count which is simpler to reason about and just 
raise the FD limit itself.

The only concern with this approach is that there could be a situation in which 
your active set of FDs is larger than the cache size and you end up opening and 
closing a file on each request. It's true that this could be a performance 
problem for pathological open file settings (e.g. 0). However in general file 
open and close isn't too expensive (maybe 1-3 disk accesses) so as long as it 
isn't too frequent it should be okay. A default of 10k should generally be very 
safe since access tends to be concentrated on active segments.
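
To make the discussion concrete, here is a hypothetical sketch (not Kafka 
code; all names are invented) of the kind of hard-bounded LRU file-handle 
cache being debated, built on java.util.LinkedHashMap's access order and 
eviction hook:

{code}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a hard-bounded file-handle cache (not Kafka code).
// An access-ordered LinkedHashMap acts as the LRU structure; once the map
// grows past the bound, the least-recently-used handle is closed and evicted.
public class FileHandleCache {
    private final LinkedHashMap<String, RandomAccessFile> cache;

    public FileHandleCache(final int maxOpenHandles) {
        // accessOrder=true: iteration order runs least- to most-recently used
        this.cache = new LinkedHashMap<String, RandomAccessFile>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, RandomAccessFile> eldest) {
                if (size() > maxOpenHandles) {      // the hard FD bound
                    try {
                        eldest.getValue().close();  // close the evicted FD
                    } catch (IOException e) {
                        // best-effort close on eviction
                    }
                    return true;
                }
                return false;
            }
        };
    }

    // On a miss the file is (re)opened; hot (active) segments stay cached, so
    // with a generous bound (e.g. 10k) reopening should be rare in practice.
    public synchronized RandomAccessFile get(String path) throws IOException {
        RandomAccessFile file = cache.get(path);
        if (file == null) {
            file = new RandomAccessFile(path, "r");
            cache.put(path, file);
        }
        return file;
    }
}
{code}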

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non-active (not written to or read from) files. (In fact, there are some 
> threads going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up the ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large-state use case).  





[DISCUSS] KIP-38: ZooKeeper Authentication

2015-10-19 Thread Flavio Junqueira
I've created the following KIP and I'd appreciate any comment on the proposal:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-38%3A+ZooKeeper+Authentication
 


This is in progress and there is code for most of it already. Please check the 
corresponding PR if you're interested.

Thanks,
-Flavio

[jira] [Comment Edited] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963632#comment-14963632
 ] 

Jay Kreps edited comment on KAFKA-2580 at 10/19/15 5:21 PM:


10TB of space with 1GB segment files means about 10k FDs (though probably a bit 
more since the last segment would be, on average, only 512M). A file descriptor 
is pretty cheap and the perf seems pretty reasonable even with a lot of them. 
So just keeping the files open should not be a huge blocker--changing your FD 
max isn't a bad thing. So let's only do this if we can do it in a way the code 
gets better and cleaner.

If we do do it I really think we have to provide a hard bound on the total 
number of FDs. I agree that it could be a bit simpler and more efficient to 
just have a timeout after which FDs are closed, but since you have to set a 
hard limit on FDs this doesn't quite solve the problem--you still have to model 
which timeout will keep you under that limit. But if you do that you might as 
well just model the total FD count which is simpler to reason about and just 
raise the FD limit itself.

So a lot of this comes down to the implementation. A naive 10k item LRU cache 
could easily be far more memory hungry than having 50k open FDs, plus being in 
heap this would add a huge number of objects to manage.

The only concern with this approach is that there could be a situation in which 
your active set of FDs is larger than the cache size and you end up opening and 
closing a file on each request. It's true that this could be a performance 
problem for pathological open file settings (e.g. 0). However in general file 
open and close isn't too expensive (maybe 1-3 disk accesses) so as long as it 
isn't too frequent it should be okay. A default of 10k should generally be very 
safe since access tends to be concentrated on active segments.


was (Author: jkreps):
10TB of space with 1GB segment files means about 10k FDs (though probably a bit 
more since the last segment would be, on average, only 512M). A file descriptor 
is pretty cheap and the perf seems pretty reasonable even with a lot of them. 
So just keeping the files open should not be a huge blocker--changing your FD 
max isn't a bad thing. So let's only do this if we can do it in a way the code 
gets better and cleaner.

If we do do it I really think we have to provide a hard bound on the total 
number of FDs. I agree that it could be a bit simpler and more efficient to 
just have a timeout after which FDs are closed, but since you have to set a 
hard limit on FDs this doesn't quite solve the problem--you still have to model 
which timeout will keep you under that limit. But if you do that you might as 
well just model the total FD count which is simpler to reason about and just 
raise the FD limit itself.

The only concern with this approach is that there could be a situation in which 
your active set of FDs is larger than the cache size and you end up opening and 
closing a file on each request. It's true that this could be a performance 
problem for pathological open file settings (e.g. 0). However in general file 
open and close isn't too expensive (maybe 1-3 disk accesses) so as long as it 
isn't too frequent it should be okay. A default of 10k should generally be very 
safe since access tends to be concentrated on active segments.

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non-active (not written to or read from) files. (In fact, there are some 
> threads going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up the ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large-state use case).  





[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-10-19 Thread Vinoth Chandar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963574#comment-14963574
 ] 

Vinoth Chandar commented on KAFKA-2580:
---

[~jjkoshy] Good point. If I understand correctly, even if, say, all consumers 
start bootstrapping with startTime=earliest, which would force opening all the 
file handles, an LRU-based scheme would keep closing file handles internally 
from oldest to newest, which is still good behaviour. We could lessen the 
impact of fs.close() on an old file by delegating the close to a background 
thread, and add a config that caps the number of items in the file handle 
cache. 

I like the cache approach better, since it will be the one place through which 
all access goes, so future features transparently play nicely with overall 
system limits. 
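
A rough sketch of the background-close idea, assuming a hypothetical 
AsyncChannelCloser that the cache's eviction hook would call (illustrative 
only, not actual Kafka code):

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical helper: rather than closing an evicted channel inline on the
// request path, hand the close() off to a single daemon background thread.
public class AsyncChannelCloser {
    private final ExecutorService closer =
        Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "fd-cache-closer");
            t.setDaemon(true);
            return t;
        });

    // Called from the cache's eviction hook for the least-recently-used entry.
    public void closeAsync(final FileChannel channel) {
        closer.submit(() -> {
            try {
                channel.close();
            } catch (IOException e) {
                // best effort; log and continue on the background thread
            }
        });
    }

    public void shutdown() {
        closer.shutdown();
    }
}
{code}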

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non active (not written to or read from) files. (in fact, there are some 
> threads going back to 2013 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested 
> to know if there are plans to address this (given how Samza's changelog 
> topic is meant to be a persistent large state use case).  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-10-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2472 started by Ismael Juma.
--
> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up fix on KAFKA-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
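
For context, a minimal producer setup that can log warnings like the above on 
affected versions; the bootstrap address and password are placeholders, and 
the exact behaviour depends on the client version:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class SslConfigWarnDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // On versions affected by KAFKA-2472, supplying SSL-related keys like
        // these caused "... was supplied but isn't a known config" warnings
        // when the producer was constructed.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.password", "striker");

        new KafkaProducer<String, String>(props).close();
    }
}
{code}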



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-19 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14963591#comment-14963591
 ] 

Jun Rao commented on KAFKA-2017:


Another option that we have in the 0.9.0 release is not to persist the 
consumer state at all, but simply to relax the checking during offset commit. 
So, we can allow any consumer to commit offsets, regardless of its consumer id 
and generation id. The current checking during offset commit really helps when 
there is a soft failure in the consumer (e.g., due to GC), which should be 
rare in the new consumer given the high default session timeout. So, in the 
normal case, I am not sure this checking buys us much. The main benefit of 
this approach is that the implementation will be much easier in 0.9.0. This 
will give us more time to think through the story (ZK based vs Kafka based) on 
persisting consumer state post 0.9.0. Also, the current proposal still 
requires existing consumers to rebalance during coordinator failover. If we 
can avoid that, it would be even better.
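
As a sketch only, the relaxed check could look something like the following; 
the names (OffsetCommitValidator, isCommitAllowed) are hypothetical, and the 
actual coordinator code differs:

{code:java}
// Hypothetical sketch of relaxing offset-commit validation; not actual
// coordinator code.
class OffsetCommitValidator {
    private final boolean relaxed; // proposed 0.9.0 behaviour

    OffsetCommitValidator(boolean relaxed) {
        this.relaxed = relaxed;
    }

    boolean isCommitAllowed(String memberId, int generationId,
                            String currentMemberId, int currentGenerationId) {
        if (relaxed) {
            // Proposed: accept commits from any consumer, so the coordinator
            // needs no persisted membership state to validate commits after
            // a failover.
            return true;
        }
        // Current: only the registered member of the current generation may
        // commit; this mainly guards against zombies after a soft failure
        // such as a long GC pause.
        return memberId.equals(currentMemberId)
            && generationId == currentGenerationId;
    }
}
{code}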

[~toddpalino], currently the offset topic is auto-created on first use. 
Hopefully by that time, the whole Kafka cluster is already started.

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to failover to 
> a new coordinator without forcing all the consumers rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)