[jira] [Updated] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2016-03-22 Thread James Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Cheng updated KAFKA-3042:
---
Attachment: state-change.log
server.log.2016-03-23-01
controller.log

[~fpj], I attached logs from one of our brokers which had this issue. Things 
seem to start happening around 2016-03-23 01:04:58.

This broker has been stuck in this state for nearly 5 hours. 


> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
> Attachments: controller.log, server.log.2016-03-23-01, 
> state-change.log
>
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR".
> I think this is because the broker considers itself the leader when in fact 
> it's a follower.
> So after several failed tries, it needs to find out who the current leader is.
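
The bounded retry suggested above can be sketched as follows (illustrative Java; `ZkStore`, `conditionalUpdate`, and the other names are hypothetical stand-ins, not Kafka's actual code). On a zkVersion conflict, the broker re-reads the current version instead of looping forever on its cached one, and gives up after a few attempts so the caller can go back and re-check who the leader is:

```java
// Hypothetical sketch of a bounded ISR update guarded by a ZooKeeper
// conditional (versioned) write. All names here are illustrative.
public class IsrUpdateSketch {
    static final int MAX_ATTEMPTS = 3;

    // Minimal stand-in for a ZooKeeper client doing conditional writes.
    interface ZkStore {
        // Returns the new version on success, or -1 on a version conflict.
        int conditionalUpdate(String path, String data, int expectedVersion);
        int currentVersion(String path);
    }

    static boolean updateIsr(ZkStore zk, String path, String isr, int cachedVersion) {
        int version = cachedVersion;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            int newVersion = zk.conditionalUpdate(path, isr, version);
            if (newVersion >= 0) {
                return true; // write accepted under the expected zkVersion
            }
            // Conflict: the cached zkVersion is stale. Re-read it rather than
            // retrying with the same cached value.
            version = zk.currentVersion(path);
        }
        // Still failing after a bounded number of tries: stop, and let the
        // caller re-check with the controller who the leader actually is.
        return false;
    }
}
```

The key difference from the behaviour reported in this ticket is the bounded loop plus the re-read: a broker that is actually a follower stops spamming the log and falls back to rediscovering the leader.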



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Request for edit access to Kafka wiki

2016-03-22 Thread Ganesh Nikam
Hi All,



I want to publish a C++ Kafka client. I have my git repository ready, and now I
want to add an entry for this new client on the Kafka “Clients” page
(Confluence wiki page).

I created a Confluence login and signed in with it, but I am not able to edit
the page. Do I need to do anything else to get write access?



If you could give me write access, that would be very helpful. Here is my
Confluence user name:

User name : ganesh.nikam





Regards

Ganesh Nikam


[jira] [Updated] (KAFKA-3448) IPV6 Regex is missing % character

2016-03-22 Thread Soumyajit Sahu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumyajit Sahu updated KAFKA-3448:
--
Fix Version/s: 0.10.1.0
   Status: Patch Available  (was: Open)

> IPV6 Regex is missing % character
> -
>
> Key: KAFKA-3448
> URL: https://issues.apache.org/jira/browse/KAFKA-3448
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: Windows,Linux
>Reporter: Soumyajit Sahu
> Fix For: 0.10.1.0
>
>
> IPv6 addresses can contain the % character: when an address is written 
> textually, the zone index is appended to the address, separated by a percent 
> sign (%).
> Reference: https://en.wikipedia.org/wiki/IPv6_address
> Example: Link-local IPv6 Address . . . . . : 
> fe80::b1da:69ca:57f7:63d8%3 (Preferred)
> When such an address appears, the broker throws 
> IllegalStateException(s"connectionId has unexpected format: $connectionId").





Build failed in Jenkins: kafka-trunk-jdk7 #1141

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3431: Remove `o.a.k.common.BrokerEndPoint` in favour of `Node`

--
[...truncated 7550 lines...]

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > testKTable 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testValueGetter PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > testNumProcesses 
PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamBranchTest > 
testKStreamBranch PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testAggBasic PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testJoin PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testStateStore 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.KeyValueTest > testHashcode PASSED

org.apache.kafka.streams.KeyValueTest > testEquals PASSED

org.apache.kafka.streams.state.internals.OffsetCheckpointTest > testReadWrite 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchBefore PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testInitialLoading PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRolling 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testSegmentMaintenance PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutSameKeyTimestamp PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchAfter PASSED


[jira] [Commented] (KAFKA-3448) IPV6 Regex is missing % character

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207852#comment-15207852
 ] 

ASF GitHub Bot commented on KAFKA-3448:
---

GitHub user soumyajit-sahu opened a pull request:

https://github.com/apache/kafka/pull/1120

KAFKA-3448: add % character to ipv6 regex

An IPv6 address can contain the % character.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Microsoft/kafka fixIPV6RegexPattern

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1120.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1120


commit 6eac36438ab823043e1771ce8b04d027e1b8e232
Author: Som Sahu 
Date:   2016-03-23T04:16:04Z

add % character to ipv6 regex




> IPV6 Regex is missing % character
> -
>
> Key: KAFKA-3448
> URL: https://issues.apache.org/jira/browse/KAFKA-3448
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: Windows,Linux
>Reporter: Soumyajit Sahu
>
> IPv6 addresses can contain the % character: when an address is written 
> textually, the zone index is appended to the address, separated by a percent 
> sign (%).
> Reference: https://en.wikipedia.org/wiki/IPv6_address
> Example: Link-local IPv6 Address . . . . . : 
> fe80::b1da:69ca:57f7:63d8%3 (Preferred)
> When such an address appears, the broker throws 
> IllegalStateException(s"connectionId has unexpected format: $connectionId").





[GitHub] kafka pull request: KAFKA-3448: add % character to ipv6 regex

2016-03-22 Thread soumyajit-sahu
GitHub user soumyajit-sahu opened a pull request:

https://github.com/apache/kafka/pull/1120

KAFKA-3448: add % character to ipv6 regex

An IPv6 address can contain the % character.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Microsoft/kafka fixIPV6RegexPattern

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1120.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1120


commit 6eac36438ab823043e1771ce8b04d027e1b8e232
Author: Som Sahu 
Date:   2016-03-23T04:16:04Z

add % character to ipv6 regex




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-0.10-jdk7 #7

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3431: Remove `o.a.k.common.BrokerEndPoint` in favour of `Node`

--
[...truncated 1591 lines...]

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED


[jira] [Commented] (KAFKA-3448) IPV6 Regex is missing % character

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207838#comment-15207838
 ] 

ASF GitHub Bot commented on KAFKA-3448:
---

Github user soumyajit-sahu closed the pull request at:

https://github.com/apache/kafka/pull/1119


> IPV6 Regex is missing % character
> -
>
> Key: KAFKA-3448
> URL: https://issues.apache.org/jira/browse/KAFKA-3448
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: Windows,Linux
>Reporter: Soumyajit Sahu
>
> IPv6 addresses can contain the % character: when an address is written 
> textually, the zone index is appended to the address, separated by a percent 
> sign (%).
> Reference: https://en.wikipedia.org/wiki/IPv6_address
> Example: Link-local IPv6 Address . . . . . : 
> fe80::b1da:69ca:57f7:63d8%3 (Preferred)
> When such an address appears, the broker throws 
> IllegalStateException(s"connectionId has unexpected format: $connectionId").





[GitHub] kafka pull request: KAFKA-3448: Include % character in IPV6 Regex

2016-03-22 Thread soumyajit-sahu
Github user soumyajit-sahu closed the pull request at:

https://github.com/apache/kafka/pull/1119




[jira] [Commented] (KAFKA-3448) IPV6 Regex is missing % character

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207837#comment-15207837
 ] 

ASF GitHub Bot commented on KAFKA-3448:
---

GitHub user soumyajit-sahu opened a pull request:

https://github.com/apache/kafka/pull/1119

KAFKA-3448: Include % character in IPV6 Regex

IPv6 addresses can contain the % character: when an address is written 
textually, the zone index is appended to the address, separated by a percent 
sign (%).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Microsoft/kafka fixIPV6Regex

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1119.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1119


commit 7710b367fd26a0c41565f35200748c23616b4477
Author: Gwen Shapira 
Date:   2015-11-07T03:46:30Z

Changing version to 0.9.0.0

commit 27d44afe664bff45d62f72335fdbb56671561512
Author: Jason Gustafson 
Date:   2015-11-08T19:38:50Z

KAFKA-2723: new consumer exception cleanup (0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #452 from hachikuji/KAFKA-2723

commit 32cd3e35f1ea8251a51860cc48a44fb2fbfd7c0e
Author: Jason Gustafson 
Date:   2015-11-08T20:36:42Z

HOTFIX: fix group coordinator edge cases around metadata storage callback 
(0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #453 from hachikuji/hotfix-group-coordinator-0.9

commit 1fd79f57b4a73308c59b797974086ca09af19b98
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T04:41:35Z

KAFKA-2480: Handle retriable and non-retriable exceptions thrown by sink 
tasks.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #450 from ewencp/kafka-2480-unrecoverable-task-errors

(cherry picked from commit f4b87deefecf4902992a84d4a3fe3b99a94ff72b)
Signed-off-by: Gwen Shapira 

commit 48013222fd426685d2907a760290d2e7c7d25aea
Author: Geoff Anderson 
Date:   2015-11-09T04:52:16Z

KAFKA-2773; (0.9.0 branch) Fixed broken vagrant provision scripts for static 
zk/broker cluster

Author: Geoff Anderson 

Reviewers: Gwen Shapira

Closes #455 from granders/KAFKA-2773-0.9.0-vagrant-fix

commit 417e283d643d8865aa3e79dffa373c8cc853d78f
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T06:11:03Z

KAFKA-2774: Rename Copycat to Kafka Connect

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #456 from ewencp/kafka-2774-rename-copycat

(cherry picked from commit f2031d40639ef34c1591c22971394ef41c87652c)
Signed-off-by: Gwen Shapira 

commit 02fbdaa4475fd12a0fdccaa103bf27cbc1bfd077
Author: Rajini Sivaram 
Date:   2015-11-09T15:23:47Z

KAFKA-2779; Close SSL socket channel on remote connection close

Close socket channel in finally block to avoid file descriptor leak when 
remote end closes the connection

Author: Rajini Sivaram 

Reviewers: Ismael Juma , Jun Rao 

Closes #460 from rajinisivaram/KAFKA-2779

(cherry picked from commit efbebc6e843850b7ed9a1d015413c99f114a7d92)
Signed-off-by: Jun Rao 

commit fdefef9536acf8569607a980a25237ef4794f645
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T17:10:20Z

KAFKA-2781; Only require signing artifacts when uploading archives.

Author: Ewen Cheslack-Postava 

Reviewers: Jun Rao 

Closes #461 from ewencp/kafka-2781-no-signing-for-install

(cherry picked from commit a24f9a23a6d8759538e91072e8d96d158d03bb63)
Signed-off-by: Jun Rao 

commit 7471394c5485a2114d35c6345d95e161a0ee6586
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:19:27Z

KAFKA-2776: Fix lookup of schema conversion cache size in JsonConverter.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #458 from ewencp/kafka-2776-json-converter-cache-config-fix

(cherry picked from commit e9fc7b8c84908ae642339a2522a79f8bb5155728)
Signed-off-by: Gwen Shapira 

commit 3aa3e85d942b514cbe842a6b3c3fe214c0ecf401
Author: Jason Gustafson 
Date:   2015-11-09T18:26:17Z

HOTFIX: bug updating cache when loading group metadata

The bug causes only the first instance of group metadata in 

[GitHub] kafka pull request: KAFKA-3448: Include % character in IPV6 Regex

2016-03-22 Thread soumyajit-sahu
GitHub user soumyajit-sahu opened a pull request:

https://github.com/apache/kafka/pull/1119

KAFKA-3448: Include % character in IPV6 Regex

IPv6 addresses can contain the % character: when an address is written 
textually, the zone index is appended to the address, separated by a percent 
sign (%).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Microsoft/kafka fixIPV6Regex

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1119.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1119


commit 7710b367fd26a0c41565f35200748c23616b4477
Author: Gwen Shapira 
Date:   2015-11-07T03:46:30Z

Changing version to 0.9.0.0

commit 27d44afe664bff45d62f72335fdbb56671561512
Author: Jason Gustafson 
Date:   2015-11-08T19:38:50Z

KAFKA-2723: new consumer exception cleanup (0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #452 from hachikuji/KAFKA-2723

commit 32cd3e35f1ea8251a51860cc48a44fb2fbfd7c0e
Author: Jason Gustafson 
Date:   2015-11-08T20:36:42Z

HOTFIX: fix group coordinator edge cases around metadata storage callback 
(0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #453 from hachikuji/hotfix-group-coordinator-0.9

commit 1fd79f57b4a73308c59b797974086ca09af19b98
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T04:41:35Z

KAFKA-2480: Handle retriable and non-retriable exceptions thrown by sink 
tasks.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #450 from ewencp/kafka-2480-unrecoverable-task-errors

(cherry picked from commit f4b87deefecf4902992a84d4a3fe3b99a94ff72b)
Signed-off-by: Gwen Shapira 

commit 48013222fd426685d2907a760290d2e7c7d25aea
Author: Geoff Anderson 
Date:   2015-11-09T04:52:16Z

KAFKA-2773; (0.9.0 branch) Fixed broken vagrant provision scripts for static 
zk/broker cluster

Author: Geoff Anderson 

Reviewers: Gwen Shapira

Closes #455 from granders/KAFKA-2773-0.9.0-vagrant-fix

commit 417e283d643d8865aa3e79dffa373c8cc853d78f
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T06:11:03Z

KAFKA-2774: Rename Copycat to Kafka Connect

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #456 from ewencp/kafka-2774-rename-copycat

(cherry picked from commit f2031d40639ef34c1591c22971394ef41c87652c)
Signed-off-by: Gwen Shapira 

commit 02fbdaa4475fd12a0fdccaa103bf27cbc1bfd077
Author: Rajini Sivaram 
Date:   2015-11-09T15:23:47Z

KAFKA-2779; Close SSL socket channel on remote connection close

Close socket channel in finally block to avoid file descriptor leak when 
remote end closes the connection

Author: Rajini Sivaram 

Reviewers: Ismael Juma , Jun Rao 

Closes #460 from rajinisivaram/KAFKA-2779

(cherry picked from commit efbebc6e843850b7ed9a1d015413c99f114a7d92)
Signed-off-by: Jun Rao 

commit fdefef9536acf8569607a980a25237ef4794f645
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T17:10:20Z

KAFKA-2781; Only require signing artifacts when uploading archives.

Author: Ewen Cheslack-Postava 

Reviewers: Jun Rao 

Closes #461 from ewencp/kafka-2781-no-signing-for-install

(cherry picked from commit a24f9a23a6d8759538e91072e8d96d158d03bb63)
Signed-off-by: Jun Rao 

commit 7471394c5485a2114d35c6345d95e161a0ee6586
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:19:27Z

KAFKA-2776: Fix lookup of schema conversion cache size in JsonConverter.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #458 from ewencp/kafka-2776-json-converter-cache-config-fix

(cherry picked from commit e9fc7b8c84908ae642339a2522a79f8bb5155728)
Signed-off-by: Gwen Shapira 

commit 3aa3e85d942b514cbe842a6b3c3fe214c0ecf401
Author: Jason Gustafson 
Date:   2015-11-09T18:26:17Z

HOTFIX: bug updating cache when loading group metadata

The bug causes only the first instance of group metadata in the topic to be 
written to the cache (because of the putIfNotExists in addGroup). Coordinator 
fail-over won't work properly unless the cache is loaded with the right 
metadata.

Author: Jason Gustafson 
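
The pitfall described in this hotfix message can be reproduced generically (a hedged sketch with made-up keys and values, not the coordinator's actual code): when replaying records into a cache, a put-if-absent keeps the first value seen per key, whereas a correct replay must keep the latest.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplayCacheDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        // Two generations of metadata for the same group appear in the log:
        cache.putIfAbsent("group-1", "generation-1");
        cache.putIfAbsent("group-1", "generation-5"); // silently ignored
        System.out.println(cache.get("group-1"));     // generation-1 (stale)

        // A plain put (or an explicit replace) keeps the latest record:
        cache.put("group-1", "generation-5");
        System.out.println(cache.get("group-1"));     // generation-5
    }
}
```

This is why fail-over misbehaved: a coordinator loading the cache this way would serve the oldest generation it ever saw for each group.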

 

[jira] [Created] (KAFKA-3448) IPV6 Regex is missing % character

2016-03-22 Thread Soumyajit Sahu (JIRA)
Soumyajit Sahu created KAFKA-3448:
-

 Summary: IPV6 Regex is missing % character
 Key: KAFKA-3448
 URL: https://issues.apache.org/jira/browse/KAFKA-3448
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0
 Environment: Windows,Linux
Reporter: Soumyajit Sahu


IPv6 addresses can contain the % character: when an address is written 
textually, the zone index is appended to the address, separated by a percent 
sign (%).
Reference: https://en.wikipedia.org/wiki/IPv6_address

Example: Link-local IPv6 Address . . . . . : 
fe80::b1da:69ca:57f7:63d8%3 (Preferred)

When such an address appears, the broker throws 
IllegalStateException(s"connectionId has unexpected format: $connectionId").





[jira] [Updated] (KAFKA-3431) Remove `o.a.k.common.BrokerEndPoint` in favour of `Node`

2016-03-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3431:
---
Description: No point in having two classes that are basically the same. 
The former was introduced during the 0.10.0.0 development cycle so it can be 
removed.  (was: As per the following comment, we should move `BrokerEndPoint` 
from `common` to `common.internals` as it's not public API.

https://issues.apache.org/jira/browse/KAFKA-2970?focusedCommentId=15157821&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15157821)

> Remove `o.a.k.common.BrokerEndPoint` in favour of `Node`
> 
>
> Key: KAFKA-3431
> URL: https://issues.apache.org/jira/browse/KAFKA-3431
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> No point in having two classes that are basically the same. The former was 
> introduced during the 0.10.0.0 development cycle so it can be removed.





[jira] [Commented] (KAFKA-3431) Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207755#comment-15207755
 ] 

ASF GitHub Bot commented on KAFKA-3431:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/1102


> Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`
> -
>
> Key: KAFKA-3431
> URL: https://issues.apache.org/jira/browse/KAFKA-3431
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As per the following comment, we should move `BrokerEndPoint` from `common` 
> to `common.internals` as it's not public API.
> https://issues.apache.org/jira/browse/KAFKA-2970?focusedCommentId=15157821&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15157821





[jira] [Updated] (KAFKA-3431) Remove `o.a.k.common.BrokerEndPoint` in favour of `Node`

2016-03-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3431:
---
Summary: Remove `o.a.k.common.BrokerEndPoint` in favour of `Node`  (was: 
Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`)

> Remove `o.a.k.common.BrokerEndPoint` in favour of `Node`
> 
>
> Key: KAFKA-3431
> URL: https://issues.apache.org/jira/browse/KAFKA-3431
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As per the following comment, we should move `BrokerEndPoint` from `common` 
> to `common.internals` as it's not public API.
> https://issues.apache.org/jira/browse/KAFKA-2970?focusedCommentId=15157821&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15157821





[GitHub] kafka pull request: KAFKA-3431; Move `BrokerEndPoint` from `o.a.k....

2016-03-22 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/1102




[jira] [Updated] (KAFKA-3431) Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`

2016-03-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3431:

   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1105
[https://github.com/apache/kafka/pull/1105]

> Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`
> -
>
> Key: KAFKA-3431
> URL: https://issues.apache.org/jira/browse/KAFKA-3431
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As per the following comment, we should move `BrokerEndPoint` from `common` 
> to `common.internals` as it's not public API.
> https://issues.apache.org/jira/browse/KAFKA-2970?focusedCommentId=15157821&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15157821





[jira] [Commented] (KAFKA-3431) Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207743#comment-15207743
 ] 

ASF GitHub Bot commented on KAFKA-3431:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1105


> Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`
> -
>
> Key: KAFKA-3431
> URL: https://issues.apache.org/jira/browse/KAFKA-3431
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As per the following comment, we should move `BrokerEndPoint` from `common` 
> to `common.internals` as it's not public API.
> https://issues.apache.org/jira/browse/KAFKA-2970?focusedCommentId=15157821=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15157821





[GitHub] kafka pull request: KAFKA-3431: Remove `o.a.k.common.BrokerEndPoin...

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1105




[jira] [Updated] (KAFKA-3432) Cluster.update() thread-safety

2016-03-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3432:
---
Reviewer: Guozhang Wang
  Status: Patch Available  (was: Open)

> Cluster.update() thread-safety
> --
>
> Key: KAFKA-3432
> URL: https://issues.apache.org/jira/browse/KAFKA-3432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> A `Cluster.update()` method was introduced during the development of 0.10.0 
> so that `StreamPartitionAssignor` can add internal topics on-the-fly and give 
> the augmented metadata to its underlying grouper.
> `Cluster` was supposed to be immutable after construction and all 
> synchronization happens via the `Metadata` instance. As far as I can see 
> `Cluster.update()` is not thread-safe even though `Cluster` is accessed by 
> multiple threads in some cases (I am not sure about the Streams case). Since 
> this is a public API, it is important to fix this in my opinion.
> A few options I can think of:
> * Since `PartitionAssignor` is an internal class, change 
> `PartitionAssignor.assign` to return a class containing the assignments and 
> optionally an updated cluster. This is straightforward, but I am not sure if 
> it's good enough for the Streams use-case. Can you please confirm [~guozhang]?
> * Pass `Metadata` instead of `Cluster` to `PartitionAssignor.assign`, giving 
> assignors the ability to update the metadata as needed.
> * Make `Cluster` thread-safe in the face of mutations (without relying on 
> synchronization at the `Metadata` level). This is not ideal, KAFKA-3428 shows 
> that the synchronization at `Metadata` level is already too costly for high 
> concurrency situations.
> Thoughts [~guozhang], [~hachikuji]?





[jira] [Commented] (KAFKA-3432) Cluster.update() thread-safety

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207715#comment-15207715
 ] 

ASF GitHub Bot commented on KAFKA-3432:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1118

KAFKA-3432; Cluster.update() thread-safety

Replace `update` with `withPartitions`, which returns a
copy instead of mutating the instance.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3432-cluster-update-thread-safety

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1118


commit c76566f4cfeb25a757c223f63e13d81e33806d68
Author: Ismael Juma 
Date:   2016-03-23T01:47:30Z

KAFKA-3432; Cluster.update() thread-safety

Replace `update` with `withPartitions` that returns a
copy instead of mutating the instance.




> Cluster.update() thread-safety
> --
>
> Key: KAFKA-3432
> URL: https://issues.apache.org/jira/browse/KAFKA-3432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> A `Cluster.update()` method was introduced during the development of 0.10.0 
> so that `StreamPartitionAssignor` can add internal topics on-the-fly and give 
> the augmented metadata to its underlying grouper.
> `Cluster` was supposed to be immutable after construction and all 
> synchronization happens via the `Metadata` instance. As far as I can see 
> `Cluster.update()` is not thread-safe even though `Cluster` is accessed by 
> multiple threads in some cases (I am not sure about the Streams case). Since 
> this is a public API, it is important to fix this in my opinion.
> A few options I can think of:
> * Since `PartitionAssignor` is an internal class, change 
> `PartitionAssignor.assign` to return a class containing the assignments and 
> optionally an updated cluster. This is straightforward, but I am not sure if 
> it's good enough for the Streams use-case. Can you please confirm [~guozhang]?
> * Pass `Metadata` instead of `Cluster` to `PartitionAssignor.assign`, giving 
> assignors the ability to update the metadata as needed.
> * Make `Cluster` thread-safe in the face of mutations (without relying on 
> synchronization at the `Metadata` level). This is not ideal, KAFKA-3428 shows 
> that the synchronization at `Metadata` level is already too costly for high 
> concurrency situations.
> Thoughts [~guozhang], [~hachikuji]?





[GitHub] kafka pull request: KAFKA-3432; Cluster.update() thread-safety

2016-03-22 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1118

KAFKA-3432; Cluster.update() thread-safety

Replace `update` with `withPartitions`, which returns a
copy instead of mutating the instance.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3432-cluster-update-thread-safety

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1118


commit c76566f4cfeb25a757c223f63e13d81e33806d68
Author: Ismael Juma 
Date:   2016-03-23T01:47:30Z

KAFKA-3432; Cluster.update() thread-safety

Replace `update` with `withPartitions` that returns a
copy instead of mutating the instance.




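The `withPartitions` change above replaces in-place mutation with a copy-on-write update: readers of the old instance never see a concurrent modification. A minimal, self-contained sketch of that pattern, using a hypothetical `ClusterInfo` class rather than Kafka's actual `Cluster`:

```java
import java.util.*;

// Hypothetical ClusterInfo class (not Kafka's actual Cluster) illustrating
// copy-on-write: instead of mutating shared state, an "update" returns a
// new immutable instance.
final class ClusterInfo {
    private final Map<String, Integer> partitionCounts;

    ClusterInfo(Map<String, Integer> partitionCounts) {
        // Defensive copy plus unmodifiable view keeps the instance immutable.
        this.partitionCounts = Collections.unmodifiableMap(new HashMap<>(partitionCounts));
    }

    int partitionCount(String topic) {
        return partitionCounts.getOrDefault(topic, 0);
    }

    // Returns a copy with the extra topic; the receiver is never mutated,
    // so concurrent readers of the old instance stay safe without locks.
    ClusterInfo withPartitions(String topic, int count) {
        Map<String, Integer> copy = new HashMap<>(partitionCounts);
        copy.put(topic, count);
        return new ClusterInfo(copy);
    }
}

public class CopyOnWriteDemo {
    public static void main(String[] args) {
        ClusterInfo original = new ClusterInfo(Map.of("orders", 4));
        ClusterInfo augmented = original.withPartitions("internal-repartition", 8);
        System.out.println(original.partitionCount("internal-repartition"));  // 0
        System.out.println(augmented.partitionCount("internal-repartition")); // 8
    }
}
```

The trade-off is an allocation per update, which is usually acceptable for rarely-changing metadata read by many threads.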


[jira] [Commented] (KAFKA-3447) partitionState in UpdateMetadataRequest not logged properly state-change log

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207670#comment-15207670
 ] 

ASF GitHub Bot commented on KAFKA-3447:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1117


> partitionState in UpdateMetadataRequest not logged properly state-change log
> 
>
> Key: KAFKA-3447
> URL: https://issues.apache.org/jira/browse/KAFKA-3447
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> Saw the following in the state-change log; the logging comes from 
> MetadataCache.updateCache().
> TRACE Broker 0 cached leader info 
> org.apache.kafka.common.requests.UpdateMetadataRequest$PartitionState@6fc2c671
>  for partition test-0 in response to UpdateMetadata request sent by 
> controller 0 epoch 1 with correlation id 3 (state.change.logger)



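The `@6fc2c671` in the log line above is Java's default `Object.toString` (class name plus identity hash), which is exactly what KAFKA-3447 fixes. A minimal sketch of the idea, using a hypothetical `PartitionStateInfo` class rather than Kafka's actual `PartitionState`:

```java
import java.util.*;

// Hypothetical PartitionStateInfo class (not Kafka's actual PartitionState):
// without an overridden toString(), logging the object prints only
// "ClassName@hashcode", which is what the state-change log showed.
final class PartitionStateInfo {
    final int leader;
    final int leaderEpoch;
    final List<Integer> isr;

    PartitionStateInfo(int leader, int leaderEpoch, List<Integer> isr) {
        this.leader = leader;
        this.leaderEpoch = leaderEpoch;
        this.isr = isr;
    }

    // Overriding toString() makes the log statement human-readable.
    @Override
    public String toString() {
        return "PartitionStateInfo(leader=" + leader
                + ", leaderEpoch=" + leaderEpoch
                + ", isr=" + isr + ")";
    }
}

public class ToStringDemo {
    public static void main(String[] args) {
        PartitionStateInfo state = new PartitionStateInfo(0, 1, List.of(0, 1, 2));
        // Prints: PartitionStateInfo(leader=0, leaderEpoch=1, isr=[0, 1, 2])
        System.out.println(state);
    }
}
```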


[jira] [Updated] (KAFKA-3447) partitionState in UpdateMetadataRequest not logged properly state-change log

2016-03-22 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3447:
---
   Resolution: Fixed
Fix Version/s: 0.10.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1117
[https://github.com/apache/kafka/pull/1117]

> partitionState in UpdateMetadataRequest not logged properly state-change log
> 
>
> Key: KAFKA-3447
> URL: https://issues.apache.org/jira/browse/KAFKA-3447
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> Saw the following in the state-change log; the logging comes from 
> MetadataCache.updateCache().
> TRACE Broker 0 cached leader info 
> org.apache.kafka.common.requests.UpdateMetadataRequest$PartitionState@6fc2c671
>  for partition test-0 in response to UpdateMetadata request sent by 
> controller 0 epoch 1 with correlation id 3 (state.change.logger)





[GitHub] kafka pull request: KAFKA-3447; partitionState in UpdateMetadataRe...

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1117




Fallout from upgrading to kafka 0.9.0.0 from 0.8.2.1

2016-03-22 Thread Qi Xu
Hi folks, Rajiv, Jun,
I'd like to revive this thread that Rajiv Kurian started 3 months ago.
Basically we did the same thing Rajiv did. I upgraded two machines (out
of 10) from 0.8.2.1 to 0.9, so after the upgrade there were 2 machines
on 0.9 and 8 machines on 0.8.2.1. Initially it all worked fine, but
after about 2 hours all old producers and consumers were broken because
no leader could be found for any partition of any topic. The producer just
complains "unknown error for topic xxx" when it tries to refresh the
metadata, and on the server side there's an error complaining about no
leader for a partition.
I'm wondering: is there any known issue with 0.9 and 0.8.2 brokers
co-existing in the same cluster? Thanks a lot.


Below is the original thread:

We had to revert to 0.8.3 because three of our topics seemed to have gotten
corrupted during the upgrade. As soon as we did the upgrade, producers to
the three topics I mentioned stopped being able to write. The clients
complained (occasionally) about leader-not-found exceptions. We restarted
our clients and brokers, but that didn't seem to help. Even after
reverting to 0.8.3 these three topics were broken. To fix it we had to stop
all clients, delete the topics, create them again, and then restart the
clients.

I realize this is not a lot of info. I couldn't wait to get more debug info
because the cluster was actually being used. Has anyone run into something
like this? Are there any known issues with old consumers/producers? The
topics that got busted had clients writing to them using the old Java
wrapper over the Scala producer.

Here are the steps I took to upgrade.

For each broker:

1. Stop the broker.
2. Restart with the *0.9* broker running with
inter.broker.protocol.version=*0.8.2*.X
3. Wait for under replicated partitions to go down to 0.
4. Go to step 1.
Once all the brokers were running the *0.9* code with
inter.broker.protocol.version=*0.8.2*.X, we restarted them one by one with
inter.broker.protocol.version=0.9.0.0.

When reverting I did the following.

For each broker.

1. Stop the broker.
2. Restart with the *0.9* broker running with
inter.broker.protocol.version=*0.8.2*.X
3. Wait for under replicated partitions to go down to 0.
4. Go to step 1.

Once all the brokers were running *0.9* code with
inter.broker.protocol.version=*0.8.2*.X, I restarted them one by one with
the 0.8.2.3 broker code. This, however, like I mentioned, did not fix the
three broken topics.


[jira] [Commented] (KAFKA-3432) Cluster.update() thread-safety

2016-03-22 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207622#comment-15207622
 ] 

Ismael Juma commented on KAFKA-3432:


That sounds better [~guozhang], I'll give it a try.

> Cluster.update() thread-safety
> --
>
> Key: KAFKA-3432
> URL: https://issues.apache.org/jira/browse/KAFKA-3432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> A `Cluster.update()` method was introduced during the development of 0.10.0 
> so that `StreamPartitionAssignor` can add internal topics on-the-fly and give 
> the augmented metadata to its underlying grouper.
> `Cluster` was supposed to be immutable after construction and all 
> synchronization happens via the `Metadata` instance. As far as I can see 
> `Cluster.update()` is not thread-safe even though `Cluster` is accessed by 
> multiple threads in some cases (I am not sure about the Streams case). Since 
> this is a public API, it is important to fix this in my opinion.
> A few options I can think of:
> * Since `PartitionAssignor` is an internal class, change 
> `PartitionAssignor.assign` to return a class containing the assignments and 
> optionally an updated cluster. This is straightforward, but I am not sure if 
> it's good enough for the Streams use-case. Can you please confirm [~guozhang]?
> * Pass `Metadata` instead of `Cluster` to `PartitionAssignor.assign`, giving 
> assignors the ability to update the metadata as needed.
> * Make `Cluster` thread-safe in the face of mutations (without relying on 
> synchronization at the `Metadata` level). This is not ideal, KAFKA-3428 shows 
> that the synchronization at `Metadata` level is already too costly for high 
> concurrency situations.
> Thoughts [~guozhang], [~hachikuji]?





[jira] [Updated] (KAFKA-3447) partitionState in UpdateMetadataRequest not logged properly state-change log

2016-03-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3447:
---
Reviewer: Jun Rao
  Status: Patch Available  (was: Open)

> partitionState in UpdateMetadataRequest not logged properly state-change log
> 
>
> Key: KAFKA-3447
> URL: https://issues.apache.org/jira/browse/KAFKA-3447
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> Saw the following in the state-change log; the logging comes from 
> MetadataCache.updateCache().
> TRACE Broker 0 cached leader info 
> org.apache.kafka.common.requests.UpdateMetadataRequest$PartitionState@6fc2c671
>  for partition test-0 in response to UpdateMetadata request sent by 
> controller 0 epoch 1 with correlation id 3 (state.change.logger)





[jira] [Commented] (KAFKA-3447) partitionState in UpdateMetadataRequest not logged properly state-change log

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207568#comment-15207568
 ] 

ASF GitHub Bot commented on KAFKA-3447:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1117

KAFKA-3447; partitionState in UpdateMetadataRequest not logged properly 
state-change log



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-3447-metadata-cache-logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1117.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1117






> partitionState in UpdateMetadataRequest not logged properly state-change log
> 
>
> Key: KAFKA-3447
> URL: https://issues.apache.org/jira/browse/KAFKA-3447
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> Saw the following in the state-change log; the logging comes from 
> MetadataCache.updateCache().
> TRACE Broker 0 cached leader info 
> org.apache.kafka.common.requests.UpdateMetadataRequest$PartitionState@6fc2c671
>  for partition test-0 in response to UpdateMetadata request sent by 
> controller 0 epoch 1 with correlation id 3 (state.change.logger)





[GitHub] kafka pull request: KAFKA-3447; partitionState in UpdateMetadataRe...

2016-03-22 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1117

KAFKA-3447; partitionState in UpdateMetadataRequest not logged properly 
state-change log



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-3447-metadata-cache-logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1117.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1117








Jenkins build is back to normal : kafka-trunk-jdk8 #471

2016-03-22 Thread Apache Jenkins Server
See 



[jira] [Assigned] (KAFKA-3447) partitionState in UpdateMetadataRequest not logged properly state-change log

2016-03-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3447:
--

Assignee: Ismael Juma

> partitionState in UpdateMetadataRequest not logged properly state-change log
> 
>
> Key: KAFKA-3447
> URL: https://issues.apache.org/jira/browse/KAFKA-3447
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> Saw the following in the state-change log; the logging comes from 
> MetadataCache.updateCache().
> TRACE Broker 0 cached leader info 
> org.apache.kafka.common.requests.UpdateMetadataRequest$PartitionState@6fc2c671
>  for partition test-0 in response to UpdateMetadata request sent by 
> controller 0 epoch 1 with correlation id 3 (state.change.logger)





[jira] [Created] (KAFKA-3447) partitionState in UpdateMetadataRequest not logged properly state-change log

2016-03-22 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-3447:
--

 Summary: partitionState in UpdateMetadataRequest not logged 
properly state-change log
 Key: KAFKA-3447
 URL: https://issues.apache.org/jira/browse/KAFKA-3447
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.10.0.0
Reporter: Jun Rao


Saw the following in the state-change log; the logging comes from 
MetadataCache.updateCache().

TRACE Broker 0 cached leader info 
org.apache.kafka.common.requests.UpdateMetadataRequest$PartitionState@6fc2c671 
for partition test-0 in response to UpdateMetadata request sent by controller 0 
epoch 1 with correlation id 3 (state.change.logger)





Re: [VOTE] KIP-51 - List Connectors REST API

2016-03-22 Thread Neha Narkhede
+1 (binding)

On Tue, Mar 22, 2016 at 3:56 PM, Liquan Pei  wrote:

> +1
>
> On Tue, Mar 22, 2016 at 3:54 PM, Gwen Shapira  wrote:
>
> > +1
> >
> > Straightforward enough and can't possibly break anything.
> >
> > On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava <
> e...@confluent.io>
> > wrote:
> >
> > > Since it's pretty minimal, we'd like to squeeze it into 0.10 if
> possible,
> > > and VOTE threads take 3 days, it was suggested it might make sense to
> > just
> > > kick off voting on this KIP immediately (and restart it if someone
> raises
> > > an issue). Feel free to object and comment in the DISCUSS thread if you
> > > feel there's something to still be discussed.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
> > >
> > > I'll obviously kick things off with a +1.
> > >
> > > -Ewen
> > >
> >
>
>
>
> --
> Liquan Pei
> Department of Physics
> University of Massachusetts Amherst
>



-- 
Thanks,
Neha


Jenkins build is back to normal : kafka-trunk-jdk7 #1138

2016-03-22 Thread Apache Jenkins Server
See 



[jira] [Assigned] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2016-03-22 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-3177:
--

Assignee: Jason Gustafson  (was: Jiangjie Qin)

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> This can easily be reproduced as follows:
> {code}
> {
> ...
> consumer.assign(SomeNonExsitingTopicParition);
> consumer.position();
> ...
> }
> {code}
> It seems when position is called we will try to do the following:
> 1. Fetch committed offsets.
> 2. If there are no committed offsets, try to reset the offset using the reset 
> strategy. In sendListOffsetRequest(), if the consumer does not know the 
> TopicPartition, it will refresh its metadata and retry. In this case, because 
> the partition does not exist, we fall into an infinite loop of refreshing 
> topic metadata.
> Another orthogonal issue is that if the topic in the above code piece does 
> not exist, position() call will actually create the topic due to the fact 
> that currently topic metadata request could automatically create the topic. 
> This is a known separate issue.



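The fix the report points toward is to bound the metadata-refresh retries instead of looping forever. A generic, self-contained sketch of that idea (the names here are illustrative, not Kafka's actual consumer internals):

```java
import java.util.*;
import java.util.function.Supplier;

public class BoundedRetryDemo {
    // Generic bounded lookup: retry a metadata-style lookup a fixed number
    // of times and then fail loudly, instead of retrying forever. This
    // mirrors the conceptual fix for the infinite metadata-refresh loop
    // described above.
    static <T> T lookupWithLimit(Supplier<Optional<T>> lookup, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Optional<T> result = lookup.get();   // e.g. one metadata refresh
            if (result.isPresent()) {
                return result.get();
            }
        }
        throw new IllegalStateException(
                "lookup failed after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) {
        // Simulated metadata cache that never learns about the partition,
        // like a position() call on a topic that does not exist.
        Map<String, Integer> leaders = new HashMap<>();
        try {
            lookupWithLimit(() -> Optional.ofNullable(leaders.get("missing-topic-0")), 3);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // fails fast instead of hanging
        }
    }
}
```

An application-side workaround along the same lines is to verify a partition exists (with a bounded wait) before assigning it and calling position().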


Re: [VOTE] KIP-51 - List Connectors REST API

2016-03-22 Thread Liquan Pei
+1

On Tue, Mar 22, 2016 at 3:54 PM, Gwen Shapira  wrote:

> +1
>
> Straightforward enough and can't possibly break anything.
>
> On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava 
> wrote:
>
> > Since it's pretty minimal, we'd like to squeeze it into 0.10 if possible,
> > and VOTE threads take 3 days, it was suggested it might make sense to
> just
> > kick off voting on this KIP immediately (and restart it if someone raises
> > an issue). Feel free to object and comment in the DISCUSS thread if you
> > feel there's something to still be discussed.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
> >
> > I'll obviously kick things off with a +1.
> >
> > -Ewen
> >
>



-- 
Liquan Pei
Department of Physics
University of Massachusetts Amherst


Re: [VOTE] KIP-51 - List Connectors REST API

2016-03-22 Thread Gwen Shapira
+1

Straightforward enough and can't possibly break anything.

On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava 
wrote:

> Since it's pretty minimal, we'd like to squeeze it into 0.10 if possible,
> and VOTE threads take 3 days, it was suggested it might make sense to just
> kick off voting on this KIP immediately (and restart it if someone raises
> an issue). Feel free to object and comment in the DISCUSS thread if you
> feel there's something to still be discussed.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
>
> I'll obviously kick things off with a +1.
>
> -Ewen
>


Re: [VOTE] KIP-51 - List Connectors REST API

2016-03-22 Thread Jason Gustafson
+1 (non-binding)

On Tue, Mar 22, 2016 at 3:46 PM, Ewen Cheslack-Postava 
wrote:

> Since it's pretty minimal, we'd like to squeeze it into 0.10 if possible,
> and VOTE threads take 3 days, it was suggested it might make sense to just
> kick off voting on this KIP immediately (and restart it if someone raises
> an issue). Feel free to object and comment in the DISCUSS thread if you
> feel there's something to still be discussed.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
>
> I'll obviously kick things off with a +1.
>
> -Ewen
>


[VOTE] KIP-51 - List Connectors REST API

2016-03-22 Thread Ewen Cheslack-Postava
Since it's pretty minimal, we'd like to squeeze it into 0.10 if possible,
and VOTE threads take 3 days, it was suggested it might make sense to just
kick off voting on this KIP immediately (and restart it if someone raises
an issue). Feel free to object and comment in the DISCUSS thread if you
feel there's something to still be discussed.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API

I'll obviously kick things off with a +1.

-Ewen


Build failed in Jenkins: kafka-trunk-jdk8 #470

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3435: Follow up to fix checkstyle

--
[...truncated 1621 lines...]

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 

[jira] [Commented] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2016-03-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207431#comment-15207431
 ] 

Jiangjie Qin commented on KAFKA-3177:
-

[~hachikuji] It would be great if you could help. I probably won't be able to 
work on this for the next couple of days. Thanks a lot for the help.

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> This can easily be reproduced as follows:
> {code}
> {
> ...
> consumer.assign(Collections.singletonList(someNonExistingTopicPartition));
> consumer.position(someNonExistingTopicPartition);
> ...
> }
> {code}
> It seems that when position() is called, we try to do the following:
> 1. Fetch the committed offsets.
> 2. If there are no committed offsets, try to reset the offset using the reset 
> strategy. In sendListOffsetRequest(), if the consumer does not know the 
> TopicPartition, it will refresh its metadata and retry. In this case, because 
> the partition does not exist, we fall into an infinite loop of refreshing 
> topic metadata.
> Another orthogonal issue is that if the topic in the above code snippet does 
> not exist, the position() call will actually create the topic, because a 
> topic metadata request can currently auto-create topics. This is a known, 
> separate issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
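The unbounded metadata-refresh loop described in this issue is the classic missing-retry-bound pattern. A minimal, language-agnostic sketch of the bounded alternative — the function name and `fetch_metadata` callback here are hypothetical stand-ins, not the actual consumer internals:

```python
def reset_offset_with_bounded_retries(partition, fetch_metadata, max_retries=5):
    """Look up an offset for `partition`, refreshing metadata at most
    `max_retries` times instead of looping forever.

    `fetch_metadata` stands in for the consumer's metadata lookup and
    returns a dict of known partition -> offset.
    """
    for _ in range(max_retries):
        known = fetch_metadata()
        if partition in known:
            return known[partition]
    # Surface the failure instead of spinning on metadata refreshes forever.
    raise LookupError(
        "partition %r not found after %d metadata refreshes"
        % (partition, max_retries))
```

With a bound in place, a position() call on a partition that never shows up in metadata fails with an exception the caller can handle, rather than hanging the consumer.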


Jenkins build is back to normal : kafka-0.10-jdk7 #4

2016-03-22 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3301) CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207396#comment-15207396
 ] 

ASF GitHub Bot commented on KAFKA-3301:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1114


> CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC  is incorrect
> --
>
> Key: KAFKA-3301
> URL: https://issues.apache.org/jira/browse/KAFKA-3301
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> The text says "The number of samples maintained to compute metrics.", which 
> is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3301) CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect

2016-03-22 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3301:
---
   Resolution: Fixed
Fix Version/s: (was: 0.10.1.0)
               0.10.0.0
       Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1114
[https://github.com/apache/kafka/pull/1114]

> CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC  is incorrect
> --
>
> Key: KAFKA-3301
> URL: https://issues.apache.org/jira/browse/KAFKA-3301
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Grant Henke
> Fix For: 0.10.0.0
>
>
> The text says "The number of samples maintained to compute metrics.", which 
> is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
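For context, the two neighboring client configs look like this (values are what I believe the 0.9/0.10 defaults to be); the description in the bad doc string actually belongs to `metrics.num.samples`, not the window config:

```properties
# Length of a single metrics sample window, in milliseconds.
metrics.sample.window.ms=30000
# The number of samples maintained to compute metrics.
metrics.num.samples=2
```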


[GitHub] kafka pull request: KAFKA-3301: CommonClientConfigs.METRICS_SAMPLE...

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1114


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #469

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3426; Improve protocol type errors when invalid sizes are received

[wangguoz] KAFKA-3319: improve session timeout broker/client config 
documentation

[cshapi] KAFKA-3219: Fix long topic name validation

--
[...truncated 3147 lines...]
kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED


Re: [DISCUSS] KIP-51 - List Connectors REST API

2016-03-22 Thread Ewen Cheslack-Postava
Yeah, we can do that. Not sure what we might want to add there, but makes
sense to keep things flexible. Updated the KIP text to reflect this.

-Ewen

On Tue, Mar 22, 2016 at 2:51 PM, Jason Gustafson  wrote:

> Hey Ewen,
>
> Just a quick question. It looks like we're returning a simple array
> containing the classnames. Would it make sense to return a set of objects
> instead? For example:
>
> [
>   { "class": "org.apache.kafka.connect.file.FileStreamSourceConnector"},
>   { "class": "org.apache.kafka.connect.file.FileStreamSinkConnector" }
> ]
>
> Then we'd be able to include additional fields later without breaking
> compatibility. Other than that, it makes sense to me.
>
> -Jason
>
> On Tue, Mar 22, 2016 at 2:35 PM, Ewen Cheslack-Postava 
> wrote:
>
> > Hi all,
> >
> > It was pointed out that we've been playing a bit fast and loose with API
> > additions in Kafka. I think it's worth discussing a lighter weight
> process
> > for APIs that are still marked unstable, but for the time being we'll add
> > KIPs before adjusting these APIs.
> >
> > To that end, I'd like to discuss (and hopefully quickly move to a vote)
> an
> > API to list connector classes. Here's the KIP:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
> >
> > This is a very small addition and already has a patch prepared.
> >
> > -Ewen
> >
>



-- 
Thanks,
Ewen


Re: [DISCUSS] KIP-51 - List Connectors REST API

2016-03-22 Thread Jason Gustafson
Hey Ewen,

Just a quick question. It looks like we're returning a simple array
containing the classnames. Would it make sense to return a set of objects
instead? For example:

[
  { "class": "org.apache.kafka.connect.file.FileStreamSourceConnector"},
  { "class": "org.apache.kafka.connect.file.FileStreamSinkConnector" }
]

Then we'd be able to include additional fields later without breaking
compatibility. Other than that, it makes sense to me.

-Jason

On Tue, Mar 22, 2016 at 2:35 PM, Ewen Cheslack-Postava 
wrote:

> Hi all,
>
> It was pointed out that we've been playing a bit fast and loose with API
> additions in Kafka. I think it's worth discussing a lighter weight process
> for APIs that are still marked unstable, but for the time being we'll add
> KIPs before adjusting these APIs.
>
> To that end, I'd like to discuss (and hopefully quickly move to a vote) an
> API to list connector classes. Here's the KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
>
> This is a very small addition and already has a patch prepared.
>
> -Ewen
>
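For what it's worth, a client that reads only the fields it knows stays compatible with the list-of-objects shape as fields are added later. A small sketch of that forward-compatible parsing — the `version` field below is hypothetical, standing in for whatever might be added:

```python
import json

def parse_connector_classes(body):
    """Return the connector class names from a list-of-objects response,
    ignoring any extra fields an older client does not know about."""
    return [entry["class"] for entry in json.loads(body)]

response = json.dumps([
    {"class": "org.apache.kafka.connect.file.FileStreamSourceConnector"},
    # A field added in some later release; existing clients simply skip it.
    {"class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
     "version": "0.10.0"},
])
classes = parse_connector_classes(response)
```

The same parser works unchanged before and after new fields are introduced, which is exactly the compatibility property the objects-instead-of-strings shape buys.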


[DISCUSS] KIP-51 - List Connectors REST API

2016-03-22 Thread Ewen Cheslack-Postava
Hi all,

It was pointed out that we've been playing a bit fast and loose with API
additions in Kafka. I think it's worth discussing a lighter-weight process
for APIs that are still marked unstable, but for the time being we'll add
KIPs before adjusting these APIs.

To that end, I'd like to discuss (and hopefully quickly move to a vote) an
API to list connector classes. Here's the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API

This is a very small addition and already has a patch prepared.

-Ewen


Build failed in Jenkins: kafka-trunk-jdk7 #1137

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3319: improve session timeout broker/client config 
documentation

[cshapi] KAFKA-3219: Fix long topic name validation

--
[...truncated 3517 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset 

[jira] [Commented] (KAFKA-3435) Remove `Unstable` annotation from new Java Consumer

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207319#comment-15207319
 ] 

ASF GitHub Bot commented on KAFKA-3435:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1116


> Remove `Unstable` annotation from new Java Consumer
> ---
>
> Key: KAFKA-3435
> URL: https://issues.apache.org/jira/browse/KAFKA-3435
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As part of the vote for "KIP-45 - Standardize all client sequence interaction 
> on j.u.Collection", the underlying assumption is that we won't break things 
> going forward. We should remove the `Unstable` annotation to make that clear.
> cc [~hachikuji]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3435: Follow up to fix checkstyle

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1116


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3435) Remove `Unstable` annotation from new Java Consumer

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207315#comment-15207315
 ] 

ASF GitHub Bot commented on KAFKA-3435:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/1116

KAFKA-3435: Follow up to fix checkstyle



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-3435-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1116.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1116


commit 0c11ec7b982fd4ff2db715ff705b3ad0f6e3efcf
Author: Ewen Cheslack-Postava 
Date:   2016-03-22T21:06:37Z

KAFKA-3435: Follow up to fix checkstyle




> Remove `Unstable` annotation from new Java Consumer
> ---
>
> Key: KAFKA-3435
> URL: https://issues.apache.org/jira/browse/KAFKA-3435
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As part of the vote for "KIP-45 - Standardize all client sequence interaction 
> on j.u.Collection", the underlying assumption is that we won't break things 
> going forward. We should remove the `Unstable` annotation to make that clear.
> cc [~hachikuji]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3435: Follow up to fix checkstyle

2016-03-22 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/1116

KAFKA-3435: Follow up to fix checkstyle



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-3435-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1116.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1116


commit 0c11ec7b982fd4ff2db715ff705b3ad0f6e3efcf
Author: Ewen Cheslack-Postava 
Date:   2016-03-22T21:06:37Z

KAFKA-3435: Follow up to fix checkstyle




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-0.10-jdk7 #3

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3426; Improve protocol type errors when invalid sizes are 
received

--
[...truncated 5446 lines...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > 

Build failed in Jenkins: kafka-trunk-jdk7 #1136

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3435: Remove `Unstable` annotation from new Java Consumer

[cshapi] MINOR: Remove the very misleading comment lines

[cshapi] KAFKA-3426; Improve protocol type errors when invalid sizes are 
received

--
[...truncated 5407 lines...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > 

[jira] [Commented] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207240#comment-15207240
 ] 

ASF GitHub Bot commented on KAFKA-3409:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1115

KAFKA-3409: handle CommitFailedException in MirrorMaker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3409

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1115.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1115


commit 07c4ad950cd29326c6db2309ca40a33040c783c6
Author: Jason Gustafson 
Date:   2016-03-22T20:34:23Z

KAFKA-3409: handle CommitFailedException in MirrorMaker




> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, 
> which therefore has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
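The recovery path proposed above — catch the commit failure and let the consumer rejoin the group instead of dying — can be sketched as follows. This is a hedged illustration in plain Python, not MirrorMaker's actual Scala code; `FakeConsumer`, `CommitFailedError`, and `safe_commit` are illustrative stand-ins.

```python
# Sketch: treat a commit that fails because the group rebalanced as
# recoverable -- drop the commit and rejoin via poll() -- instead of
# letting the exception kill the mirror maker thread.
class CommitFailedError(Exception):
    """Stand-in for Kafka's CommitFailedException."""

class FakeConsumer:
    def __init__(self):
        self.generation_valid = False  # a rebalance invalidated our membership
        self.committed = None

    def poll(self):
        self.generation_valid = True   # poll() rejoins the group

    def commit_sync(self, offset):
        if not self.generation_valid:
            raise CommitFailedError("commit cannot be completed due to group rebalance")
        self.committed = offset

def safe_commit(consumer, offset):
    """Return True if the commit stuck, False if it was dropped for a rebalance."""
    try:
        consumer.commit_sync(offset)
        return True
    except CommitFailedError:
        # The group rebalanced and this member's generation is stale.
        # Dropping the commit is safe: the next poll() rejoins the group
        # and offsets are committed on the new generation.
        return False

consumer = FakeConsumer()
assert not safe_commit(consumer, 42)  # stale generation: commit dropped, no crash
consumer.poll()                       # rejoin the group
assert safe_commit(consumer, 42)      # commit succeeds after rejoining
```

The design choice matches the stack trace above: the fatal path goes through an uncaught `commitSync`, so wrapping only that call is enough to keep the thread alive.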



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3409: handle CommitFailedException in Mi...

2016-03-22 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1115

KAFKA-3409: handle CommitFailedException in MirrorMaker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3409

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1115.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1115


commit 07c4ad950cd29326c6db2309ca40a33040c783c6
Author: Jason Gustafson 
Date:   2016-03-22T20:34:23Z

KAFKA-3409: handle CommitFailedException in MirrorMaker




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2016-03-22 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207225#comment-15207225
 ] 

Jason Gustafson commented on KAFKA-3177:


[~becket_qin] Were you planning to submit a patch for this? I can pick it up if 
you don't have time.

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> This can be easily reproduced as following:
> {code}
> {
> ...
> consumer.assign(someNonExistingTopicPartition);
> consumer.position(someNonExistingTopicPartition);
> ...
> }
> {code}
> It seems that when position() is called we try to do the following:
> 1. Fetch committed offsets.
> 2. If there are no committed offsets, try to reset the offset using the reset 
> strategy. In sendListOffsetRequest(), if the consumer does not know the 
> TopicPartition, it will refresh its metadata and retry. In this case, because 
> the partition does not exist, we fall into an infinite loop of refreshing 
> topic metadata.
> Another orthogonal issue is that if the topic in the above code snippet does 
> not exist, the position() call will actually create the topic, because a 
> topic metadata request can currently auto-create the topic. This is a known 
> separate issue.
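The refresh-and-retry loop described above hangs because nothing bounds it. A hedged sketch (illustrative names, not the consumer's real internals) shows how a retry limit turns the hang into an error:

```python
# Sketch: if the offset lookup refreshes metadata and retries whenever the
# partition is unknown, a partition that never appears loops forever.
# Bounding the retries (or using a deadline) makes the failure explicit.
def lookup_offset(metadata, partition, refresh, max_retries=5):
    for _ in range(max_retries):
        if partition in metadata:
            return metadata[partition]
        refresh(metadata)  # may or may not learn about the partition
    raise TimeoutError(
        f"partition {partition!r} still unknown after {max_retries} metadata refreshes")

# Partition never appears: bounded failure instead of an infinite loop.
try:
    lookup_offset({}, "missing-0", lambda md: None)
    raise AssertionError("expected TimeoutError")
except TimeoutError:
    pass

# Partition appears after one refresh: the lookup succeeds normally.
assert lookup_offset({}, "late-0", lambda md: md.setdefault("late-0", 123)) == 123
```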



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #468

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3435: Remove `Unstable` annotation from new Java Consumer

[cshapi] MINOR: Remove the very misleading comment lines

--
[...truncated 5378 lines...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED


[jira] [Updated] (KAFKA-3219) Long topic names mess up broker topic state

2016-03-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3219:

   Resolution: Fixed
Fix Version/s: 0.10.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 898
[https://github.com/apache/kafka/pull/898]

> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> they are created using kafka-topics.sh --create.
> The problem does not seem to arise when the topic is created through 
> automatic topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}
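The NullPointerException above comes from the log directory layout: a partition's directory is named `<topic>-<partition>`, and common filesystems cap a single path component at 255 bytes. A hedged sketch of the validation (the 249 limit is the value adopted in later Kafka releases, stated here as an assumption):

```python
# Sketch: reject topic names that leave no room for the "-<partition>"
# suffix in a 255-byte filesystem name component.
MAX_TOPIC_LENGTH = 249  # 255 minus headroom for '-' plus a partition number

def validate_topic_name(topic: str) -> None:
    if not topic:
        raise ValueError("topic name must be non-empty")
    if len(topic) > MAX_TOPIC_LENGTH:
        raise ValueError(
            f"topic name is {len(topic)} chars; max is {MAX_TOPIC_LENGTH} so "
            f"'<topic>-<partition>' fits in a 255-byte directory name")

validate_topic_name("d" * 249)      # accepted
try:
    validate_topic_name("d" * 254)  # the reproduction case above
    raise AssertionError("expected rejection")
except ValueError:
    pass
```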



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3219) Long topic names mess up broker topic state

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207185#comment-15207185
 ] 

ASF GitHub Bot commented on KAFKA-3219:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/898


> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> they are created using kafka-topics.sh --create.
> The problem does not seem to arise when the topic is created through 
> automatic topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3319) Improve session timeout broker and client configuration documentation

2016-03-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3319.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 1106
[https://github.com/apache/kafka/pull/1106]

> Improve session timeout broker and client configuration documentation
> -
>
> Key: KAFKA-3319
> URL: https://issues.apache.org/jira/browse/KAFKA-3319
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> The current descriptions of the consumer's session timeout and the broker's 
> group min and max session timeouts are very matter-of-fact: they define 
> exactly what the configuration is and nothing else. We should provide more 
> detail about why these settings exist and why a user might need to change 
> them.
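One relationship the improved documentation should spell out: the broker rejects a consumer whose session timeout falls outside the broker's configured window. A hedged sketch of that range check (the default values below are the 0.9-era defaults, stated as an assumption):

```python
# Sketch: the broker accepts a joining consumer only if its
# session.timeout.ms lies within [group.min.session.timeout.ms,
# group.max.session.timeout.ms].
GROUP_MIN_SESSION_TIMEOUT_MS = 6000   # assumed 0.9-era broker default
GROUP_MAX_SESSION_TIMEOUT_MS = 30000  # assumed 0.9-era broker default

def broker_accepts(session_timeout_ms,
                   min_ms=GROUP_MIN_SESSION_TIMEOUT_MS,
                   max_ms=GROUP_MAX_SESSION_TIMEOUT_MS):
    """Illustrative mirror of the broker's JoinGroup-time range check."""
    return min_ms <= session_timeout_ms <= max_ms

assert broker_accepts(10000)      # within the window: accepted
assert not broker_accepts(60000)  # requires raising group.max.session.timeout.ms first
```

This is why a user who raises the consumer's `session.timeout.ms` alone sees the join rejected: the broker-side maximum must be raised in step.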



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3219: Fix long topic name validation

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/898


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3319) Improve session timeout broker and client configuration documentation

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207182#comment-15207182
 ] 

ASF GitHub Bot commented on KAFKA-3319:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1106


> Improve session timeout broker and client configuration documentation
> -
>
> Key: KAFKA-3319
> URL: https://issues.apache.org/jira/browse/KAFKA-3319
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> The current descriptions of the consumer's session timeout and the broker's 
> group min and max session timeouts are very matter-of-fact: they define 
> exactly what the configuration is and nothing else. We should provide more 
> detail about why these settings exist and why a user might need to change 
> them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10-jdk7 #2

2016-03-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3435: Remove `Unstable` annotation from new Java Consumer

[cshapi] MINOR: Remove the very misleading comment lines

--
[...truncated 3119 lines...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-22 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207142#comment-15207142
 ] 

Dana Powers commented on KAFKA-3442:


sounds good to me!

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207127#comment-15207127
 ] 

Jiangjie Qin commented on KAFKA-3442:
-

[~dana.powers] I think we all agree that having an explicit return code is 
better than inferring from a partial message. But that is a bigger change and 
might take some time. So as a quick fix we can return a partial message to 
maintain the current behavior.
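The quick fix described above — cap the bytes returned for a partition at the requested limit so the client sees at most a partial trailing message — can be sketched as follows (illustrative names, not the broker's actual fetch path):

```python
# Sketch: truncate the partition payload to max.partition.fetch.bytes.
# A partial final message is the 0.8/0.9 signal that the client should
# retry with a larger fetch size.
def build_partition_payload(messages, max_partition_fetch_bytes):
    out = bytearray()
    for msg in messages:
        remaining = max_partition_fetch_bytes - len(out)
        if remaining <= 0:
            break
        out += msg[:remaining]  # the last message may be truncated
    return bytes(out)

payload = build_partition_payload([b"x" * 600, b"y" * 600], 1024)
assert len(payload) == 1024          # never exceeds the cap
assert payload.endswith(b"y" * 424)  # the trailing message is partial
```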

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3426) Improve protocol type errors when invalid sizes are received

2016-03-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3426:

   Resolution: Fixed
Fix Version/s: 0.10.0.0
   0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1100
[https://github.com/apache/kafka/pull/1100]

> Improve protocol type errors when invalid sizes are received
> 
>
> Key: KAFKA-3426
> URL: https://issues.apache.org/jira/browse/KAFKA-3426
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> We currently don't perform much validation on the size value read by the 
> protocol types. This means that we end up throwing exceptions like 
> `BufferUnderflowException`, `NegativeArraySizeException`, etc. `Schema.read` 
> catches these exceptions and adds some useful information like:
> {code}
> throw new SchemaException("Error reading field '" + fields[i].name +
>   "': " +
>   (e.getMessage() == null ? 
> e.getClass().getName() : e.getMessage()));
> {code}
> We could do even better by throwing a `SchemaException` with a more user 
> friendly message.
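The improvement amounts to validating a length prefix before allocating or slicing, so the reader raises one descriptive schema error instead of surfacing `BufferUnderflowException`/`NegativeArraySizeException` from deeper layers. A hedged sketch in plain Python (`SchemaError` and `read_sized_bytes` are illustrative, not Kafka's protocol classes):

```python
# Sketch: check the 4-byte size prefix against the remaining buffer before
# using it, producing a single user-friendly error for all bad-size cases.
import struct

class SchemaError(Exception):
    pass

def read_sized_bytes(buf: bytes, offset: int = 0) -> bytes:
    if offset + 4 > len(buf):
        raise SchemaError(f"missing 4-byte size prefix at offset {offset}")
    (size,) = struct.unpack_from(">i", buf, offset)
    if size < 0:
        raise SchemaError(f"invalid negative size {size} at offset {offset}")
    if offset + 4 + size > len(buf):
        raise SchemaError(
            f"size {size} at offset {offset} exceeds the "
            f"{len(buf) - offset - 4} bytes remaining in the buffer")
    return buf[offset + 4 : offset + 4 + size]

assert read_sized_bytes(struct.pack(">i", 3) + b"abc") == b"abc"
try:
    read_sized_bytes(struct.pack(">i", -1))
    raise AssertionError("expected SchemaError")
except SchemaError:
    pass
```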



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3426) Improve protocol type errors when invalid sizes are received

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207121#comment-15207121
 ] 

ASF GitHub Bot commented on KAFKA-3426:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1100


> Improve protocol type errors when invalid sizes are received
> 
>
> Key: KAFKA-3426
> URL: https://issues.apache.org/jira/browse/KAFKA-3426
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> We currently don't perform much validation on the size value read by the 
> protocol types. This means that we end up throwing exceptions like 
> `BufferUnderflowException`, `NegativeArraySizeException`, etc. `Schema.read` 
> catches these exceptions and adds some useful information like:
> {code}
> throw new SchemaException("Error reading field '" + fields[i].name +
>   "': " +
>   (e.getMessage() == null ? 
> e.getClass().getName() : e.getMessage()));
> {code}
> We could do even better by throwing a `SchemaException` with a more user 
> friendly message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3426; Improve protocol type errors when ...

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1100


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Remove the very misleading comment line...

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/793


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3435) Remove `Unstable` annotation from new Java Consumer

2016-03-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3435:

   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1113
[https://github.com/apache/kafka/pull/1113]

> Remove `Unstable` annotation from new Java Consumer
> ---
>
> Key: KAFKA-3435
> URL: https://issues.apache.org/jira/browse/KAFKA-3435
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As part of the vote for "KIP-45 - Standardize all client sequence interaction 
> on j.u.Collection", the underlying assumption is that we won't break things 
> going forward. We should remove the `Unstable` annotation to make that clear.
> cc [~hachikuji]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3435) Remove `Unstable` annotation from new Java Consumer

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207030#comment-15207030
 ] 

ASF GitHub Bot commented on KAFKA-3435:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1113


> Remove `Unstable` annotation from new Java Consumer
> ---
>
> Key: KAFKA-3435
> URL: https://issues.apache.org/jira/browse/KAFKA-3435
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As part of the vote for "KIP-45 - Standardize all client sequence interaction 
> on j.u.Collection", the underlying assumption is that we won't break things 
> going forward. We should remove the `Unstable` annotation to make that clear.
> cc [~hachikuji]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinately

2016-03-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206956#comment-15206956
 ] 

Jun Rao commented on KAFKA-3296:


[~thecoop1984], it's a bit weird that the controller doesn't contain any 
entries on the  __consumer_offsets topic, which is used for determining the 
group coordinator. Could you run the topic command to describe  
__consumer_offsets and post the output?

> All consumer reads hang indefinately
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
> Attachments: controller.zip, kafkalogs.zip
>
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> 

[jira] [Commented] (KAFKA-3210) Using asynchronous calls through the raw ZK API in ZkUtils

2016-03-22 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206935#comment-15206935
 ] 

Flavio Junqueira commented on KAFKA-3210:
-

[~granthenke] my main goal here is to have access to the async API and have 
more control over what happens between sessions. I'm more used to programming 
against zookeeper directly, so I'm more inclined to pursue that direction. 
Also, given how ZkUtils is structured, it is not entirely clear to me how much 
we would be able to benefit from the recipes that curator can offer. Having 
said that, I don't have any major concern with using a different wrapper if 
this community prefers that option as long as we are able to make use of 
asynchronous calls and have more control over session creation.

> Using asynchronous calls through the raw ZK API in ZkUtils
> --
>
> Key: KAFKA-3210
> URL: https://issues.apache.org/jira/browse/KAFKA-3210
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, zkclient
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>
> We have observed a number of issues with the controller interaction with 
> ZooKeeper mainly because ZkClient creates new sessions transparently under 
> the hood. Creating sessions transparently enables, for example, old 
> controller to successfully update znodes in ZooKeeper even when they aren't 
> the controller any longer (e.g., KAFKA-3083). To fix this, we need to bypass 
> the ZkClient lib like we did with ZKWatchedEphemeral.
> In addition to fixing such races with the controller, it would improve 
> performance significantly if we used the async API (see KAFKA-3038). The 
> async API is more efficient because it pipelines the requests to ZooKeeper, 
> and the number of requests upon controller recovery can be large.
> This jira proposes to make these two changes to the calls in ZkUtils and to 
> do it, one path is to first replace the calls in ZkUtils with raw async ZK 
> calls and block so that we don't have to change the controller code in this 
> phase. Once this step is accomplished and it is stable, we make changes to 
> the controller to handle the asynchronous calls to ZooKeeper.
> Note that in the first step, we will need to introduce some new logic for 
> session management, which is currently handled entirely by ZkClient. We will 
> also need to implement the subscription mechanism for event notifications 
> (see ZooKeeperLeaderElector as an example).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
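The first phase Flavio proposes in KAFKA-3210 above — issue raw asynchronous calls but block before returning, so the controller code stays synchronous while requests pipeline — can be illustrated with `CompletableFuture`. This shows only the pipelining pattern, not the actual ZooKeeper client API (which takes callbacks rather than returning futures):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class PipelinedReads {
    // Stand-in for an async ZooKeeper read; the real API takes a callback.
    static CompletableFuture<String> readAsync(String path) {
        return CompletableFuture.supplyAsync(() -> "data@" + path);
    }

    // Issue all requests first, then block once for all results. Callers stay
    // synchronous (phase one) while the requests overlap in flight, which is
    // where the recovery-time win comes from when the request count is large.
    static List<String> readAll(List<String> paths) {
        List<CompletableFuture<String>> inFlight = paths.stream()
            .map(PipelinedReads::readAsync)
            .collect(Collectors.toList());
        return inFlight.stream()
            .map(CompletableFuture::join)  // single blocking point
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(readAll(List.of("/brokers/ids/1", "/brokers/ids/2")));
    }
}
```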


[jira] [Resolved] (KAFKA-3367) Delete topic dont delete the complete log from kafka

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-3367.

Resolution: Not A Problem

> Delete topic dont delete the complete log from kafka
> 
>
> Key: KAFKA-3367
> URL: https://issues.apache.org/jira/browse/KAFKA-3367
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Akshath Patkar
>
> Delete topic Just marks the topic as deleted. But data still remain in logs.
> How can we delete the topic completely with out doing manual delete of logs 
> from kafka and zookeeper



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-22 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206876#comment-15206876
 ] 

Dana Powers commented on KAFKA-3442:


[~junrao] Actually I meant adding the error code to v0 and v1 responses. I 
think that would be helpful if the response behavior changes to no longer 
include partial messages. But if the behavior doesn't change for v0/v1 then I 
agree it is probably better to defer to a future release.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3417) Invalid characters in config properties not being validated?

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3417:
--

Assignee: Grant Henke

> Invalid characters in config properties not being validated?
> 
>
> Key: KAFKA-3417
> URL: https://issues.apache.org/jira/browse/KAFKA-3417
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.1
>Reporter: Byron Ruth
>Assignee: Grant Henke
>Priority: Minor
>
> I ran into an error using a {{client.id}} with invalid characters (per the 
> [config 
> validator|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/common/Config.scala#L25-L35]).
>  I was able to get that exact error using the {{kafka-console-consumer}} 
> script, presumably because I supplied a consumer properties file and it 
> validated prior to hitting the server. However, when I use a client library 
> (sarama for Go in this case), an error in the metrics subsystem is thrown 
> [here|https://github.com/apache/kafka/blob/977ebbe9bafb6c1a6e1be69620f745712118fe80/clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java#L380].
> The stacktrace is:
> {code:title=stack.java}
> [2016-03-17 17:43:47,342] ERROR [KafkaApi-0] error when handling request 
> Name: FetchRequest; Version: 0; CorrelationId: 2; ClientId: foo:bar; 
> ReplicaId: -1; MaxWait: 250 ms; MinBytes: 1 bytes; RequestInfo: [foo,0] -> 
> PartitionFetchInfo(0,32768) (kafka.server.KafkaApis)
> org.apache.kafka.common.KafkaException: Error creating mbean attribute for 
> metricName :MetricName [name=throttle-time, group=Fetch, description=Tracking 
> average throttle-time per client, tags={client-id=foo:bar}]
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:113)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> ...
> {code}
> Assuming the cause is related to the invalid characters, shouldn't the 
> {{clientId}} be validated when the request header is decoded, prior to being 
> used?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
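The config validator linked in KAFKA-3417 above restricts names to letters, digits, `.`, `_` and `-`. A minimal sketch of applying that check to a {{clientId}} at request-decoding time — treating the exact character set as an assumption of this sketch — would reject a value like `foo:bar` before it ever reaches the metrics subsystem:

```java
import java.util.regex.Pattern;

public class ClientIdValidator {
    // Character class mirroring the one in kafka.common.Config: letters,
    // digits, '.', '_' and '-'. The exact set is assumed, not quoted.
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]*");

    static boolean isValid(String clientId) {
        return LEGAL.matcher(clientId).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("my-client_1"));  // true
        System.out.println(isValid("foo:bar"));      // false: ':' is illegal
    }
}
```

Validating here would let the broker return a clear protocol error instead of the MBean-creation stack trace shown in the report.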


[jira] [Updated] (KAFKA-3426) Improve protocol type errors when invalid sizes are received

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3426:
---
Affects Version/s: 0.9.0.0

> Improve protocol type errors when invalid sizes are received
> 
>
> Key: KAFKA-3426
> URL: https://issues.apache.org/jira/browse/KAFKA-3426
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> We currently don't perform much validation on the size value read by the 
> protocol types. This means that we end up throwing exceptions like 
> `BufferUnderflowException`, `NegativeArraySizeException`, etc. `Schema.read` 
> catches these exceptions and adds some useful information like:
> {code}
> throw new SchemaException("Error reading field '" + fields[i].name +
>   "': " +
>   (e.getMessage() == null ? 
> e.getClass().getName() : e.getMessage()));
> {code}
> We could do even better by throwing a `SchemaException` with a more user 
> friendly message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins job for 0.10.0

2016-03-22 Thread Gwen Shapira
Hi Team Kafka,

Just a quick update - I've created a Jenkins job to test the 0.10.0 branch,
to make sure we don't accidentally break it :)

I know our tests have been all over the place recently, so this may not be
as useful as it should be - but it is worth taking a look at the tests to
make sure we understand why they fail.

https://builds.apache.org/job/kafka-0.10-jdk7/

Gwen


[jira] [Comment Edited] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206808#comment-15206808
 ] 

Jiangjie Qin edited comment on KAFKA-3442 at 3/22/16 5:42 PM:
--

[~junrao] Yes, this is a known issue when the original patch was written. The 
reason I did not address that was because it only happens when clients are old. 
Do you think we should support that and form a partial message?


was (Author: becket_qin):
[~junrao] Yes, this is a known issue when the original patch was written. The 
reason I did not address that was because it is only happens when clients are 
old. Do you think we should support that and form a partial message?

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206867#comment-15206867
 ] 

Jun Rao commented on KAFKA-3442:


[~becket_qin], it's probably better if we keep supporting the current partial 
message behavior. If the non-java clients follow the same logic as the java 
client for dealing with partial messages, I was thinking that if the broker 
detects this case, it can just send a constant ByteBufferMessageSet that 
contains an arbitrary partial message.

[~dana.powers], it's probably a good idea for the broker to detect the partial 
message case and send an error in the response directly instead of having the 
client guess this from the payload. I think this is possible since the 
broker already does a small scan to find the first message to return. We can 
just further check the size of the first message. It's too late to do this in 
0.10.0 though since this may require bumping the version of FetchRequest. 
[~ijuma], do you want to file a separate jira to track this?

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
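The partial-message check that the Java client (and, per the comments below, kafka-python) performs can be sketched as follows, under a deliberately simplified wire layout — a 4-byte size prefix followed by the payload; real Kafka message sets also carry an 8-byte offset and a CRC, omitted here. A message is partial exactly when its declared size exceeds the bytes actually returned:

```java
import java.nio.ByteBuffer;

public class PartialMessageCheck {
    // Simplified layout assumed: [4-byte size][payload]. Returns true when the
    // first message in the set was truncated by the fetch size limit.
    static boolean firstMessageIsPartial(ByteBuffer messageSet) {
        if (messageSet.remaining() < 4)
            return messageSet.hasRemaining();  // not even a full size prefix
        int size = messageSet.getInt(messageSet.position());
        return size > messageSet.remaining() - 4;
    }

    public static void main(String[] args) {
        ByteBuffer complete = ByteBuffer.allocate(9).putInt(5).put(new byte[5]);
        complete.flip();
        ByteBuffer truncated = ByteBuffer.allocate(7).putInt(5).put(new byte[3]);
        truncated.flip();
        System.out.println(firstMessageIsPartial(complete));   // false
        System.out.println(firstMessageIsPartial(truncated));  // true
    }
}
```

When the partial message is the only one in the response, clients following this logic surface a record-too-large error to the user, which is why a broker-side error code would be a cleaner signal.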


[GitHub] kafka pull request: KAFKA-3301: CommonClientConfigs.METRICS_SAMPLE...

2016-03-22 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1114

KAFKA-3301: CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incor…

…rect

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka window-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1114.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1114


commit 0d64663754c801ad347098b6d321daa3c54d70a6
Author: Grant Henke 
Date:   2016-03-22T17:40:50Z

KAFKA-3301: CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3301) CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3301:
---
Status: Patch Available  (was: Open)

> CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC  is incorrect
> --
>
> Key: KAFKA-3301
> URL: https://issues.apache.org/jira/browse/KAFKA-3301
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> The text says "The number of samples maintained to compute metrics.", which 
> is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3301) CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206866#comment-15206866
 ] 

ASF GitHub Bot commented on KAFKA-3301:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/1114

KAFKA-3301: CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incor…

…rect

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka window-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1114.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1114


commit 0d64663754c801ad347098b6d321daa3c54d70a6
Author: Grant Henke 
Date:   2016-03-22T17:40:50Z

KAFKA-3301: CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect




> CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC  is incorrect
> --
>
> Key: KAFKA-3301
> URL: https://issues.apache.org/jira/browse/KAFKA-3301
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> The text says "The number of samples maintained to compute metrics.", which 
> is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-22 Thread Magnus Edenhill (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206838#comment-15206838
 ] 

Magnus Edenhill commented on KAFKA-3442:


librdkafka silently ignores partial messages (which are most often seen at the 
end of a messageset), so it should be improved for this case where it gets 
stuck on a single message.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-22 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206820#comment-15206820
 ] 

Jason Gustafson commented on KAFKA-3409:


[~singhashish] Thanks! The thing that's bugging me is that MM didn't shut down 
after the exception was raised, so let me see if I can figure out what's going 
on there. If not, then you can probably just submit your patch.

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because the CommitFailedException is not caught by mirror 
> maker, so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
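The catch-and-continue approach proposed in KAFKA-3409 above can be sketched with a stand-in committer. The real `CommitFailedException` lives in `org.apache.kafka.clients.consumer` and is thrown by `commitSync()` on group rebalance; this sketch only shows the control flow of catching it instead of letting it kill the mirror-maker thread:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CommitRetry {
    // Stand-in for org.apache.kafka.clients.consumer.CommitFailedException.
    static class CommitFailedException extends RuntimeException {}

    interface Committer { void commitSync(); }

    // Catch the commit failure rather than propagating it; on the next poll()
    // the consumer would rebalance and rejoin the group.
    static boolean commitOrSkip(Committer committer) {
        try {
            committer.commitSync();
            return true;
        } catch (CommitFailedException e) {
            // Offsets for this generation are lost; some records may be
            // mirrored again after the rebalance (at-least-once delivery).
            return false;
        }
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Committer flaky = () -> {
            if (calls.incrementAndGet() == 1) throw new CommitFailedException();
        };
        System.out.println(commitOrSkip(flaky));  // false: first commit fails
        System.out.println(commitOrSkip(flaky));  // true
    }
}
```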


[jira] [Commented] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-22 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206800#comment-15206800
 ] 

Ashish K Singh commented on KAFKA-3409:
---

[~hachikuji] there is a trivial patch, 
https://github.com/SinghAsDev/kafka/commit/e411607b878df5becd9cead783768eede6e1fb9d.
 Feel free to take it.

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because the CommitFailedException is not caught by mirror 
> maker, so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3301) CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC is incorrect

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3301:
--

Assignee: Grant Henke

> CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_DOC  is incorrect
> --
>
> Key: KAFKA-3301
> URL: https://issues.apache.org/jira/browse/KAFKA-3301
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> The text says "The number of samples maintained to compute metrics.", which 
> is incorrect.





[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-22 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15206790#comment-15206790
 ] 

Dana Powers commented on KAFKA-3442:


yes, kafka-python uses the same check for partial messages as the java client 
and will raise a RecordTooLargeException to user if there is only a partial 
message. I think it would be best if the 0.10 broker continued to return 
partial messages, but since the protocol spec refers to this behavior as an 
optimization then I think it would be acceptable to change the behavior and 
return empty payload. Would it be possible to include an error_code with the 
empty payload?
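The partial-message check described above can be sketched as follows. This is a simplified, hypothetical stand-in (an 8-byte offset plus 4-byte size header per message, which is the pre-0.11 message-set framing in broad strokes), not the actual client code: a fetched message set that ends mid-message has a truncated tail, and when that tail is the only message the clients surface RecordTooLargeException.

```java
import java.nio.ByteBuffer;

// Hedged sketch of the partial-message check the clients apply to a fetched
// message set. Framing is simplified: 8-byte offset, 4-byte size, payload.
public class PartialMessageCheck {

    // Counts complete messages in the buffer; a truncated tail is ignored.
    // If this returns 0 for a non-empty buffer, the only message was partial
    // (the situation where clients raise RecordTooLargeException).
    static int completeMessages(ByteBuffer set) {
        int count = 0;
        while (set.remaining() >= 12) {            // offset + size header
            set.getLong();                         // offset (unused here)
            int size = set.getInt();               // message size
            if (set.remaining() < size) break;     // partial tail, stop
            set.position(set.position() + size);   // skip payload
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // One complete 3-byte message followed by a header claiming 100
        // bytes with no payload behind it: only the first message counts.
        ByteBuffer b = ByteBuffer.allocate(27);
        b.putLong(0L).putInt(3).put(new byte[3]);
        b.putLong(1L).putInt(100);
        b.flip();
        System.out.println(completeMessages(b));
    }
}
```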

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Updated] (KAFKA-3438) Rack Aware Replica Reassignment should warn of overloaded brokers

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3438:
---
Fix Version/s: 0.10.0.1

> Rack Aware Replica Reassignment should warn of overloaded brokers
> -
>
> Key: KAFKA-3438
> URL: https://issues.apache.org/jira/browse/KAFKA-3438
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ben Stopford
> Fix For: 0.10.0.1
>
>
> We've changed the replica reassignment code to be rack aware.
> One problem that might catch users out would be that they rebalance the 
> cluster using kafka-reassign-partitions.sh but their rack configuration means 
> that some high proportion of replicas are pushed onto a single, or small 
> number of, brokers. 
> This should be an easy problem to avoid, by changing the rack assignment 
> information, but we should probably warn users if they are going to create 
> something that is unbalanced. 
> So imagine I have a Kafka cluster of 12 nodes spread over two racks with rack 
> awareness enabled. If I add a 13th machine, on a new rack, and run the 
> rebalance tool, that new machine will get ~6x as many replicas as the least 
> loaded broker. 
> Suggest a warning be added to the tool output when --generate is called: 
> "The most loaded broker has 2.3x as many replicas as the least loaded 
> broker. This is likely due to an uneven distribution of brokers across racks. 
> You're advised to alter the rack config so there are approximately the same 
> number of brokers per rack" and displays the individual rack→#brokers and 
> broker→#replicas data for the proposed move.  
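The proposed check reduces to comparing the most and least loaded brokers in the generated plan. A minimal stand-alone sketch (class and method names here are hypothetical, not the actual kafka-reassign-partitions.sh code):

```java
import java.util.Collections;
import java.util.Map;

// Sketch of the proposed --generate warning: compute the max/min replica
// ratio across brokers in a proposed assignment and warn above a threshold.
public class ReassignmentBalanceCheck {

    // replicasPerBroker: brokerId -> number of replicas in the proposed plan
    static double imbalanceRatio(Map<Integer, Integer> replicasPerBroker) {
        int max = Collections.max(replicasPerBroker.values());
        int min = Collections.min(replicasPerBroker.values());
        return (double) max / min;
    }

    // Returns a warning string when the plan is unbalanced, null otherwise.
    static String warningIfUnbalanced(Map<Integer, Integer> replicasPerBroker,
                                      double threshold) {
        double ratio = imbalanceRatio(replicasPerBroker);
        if (ratio <= threshold) return null;
        return String.format(
            "The most loaded broker has %.1fx as many replicas as the least "
          + "loaded broker. This is likely due to an uneven distribution of "
          + "brokers across racks.", ratio);
    }

    public static void main(String[] args) {
        // A new broker alone on its own rack picks up far more replicas
        // than the brokers on the shared rack, triggering the warning.
        Map<Integer, Integer> plan = Map.of(1, 10, 2, 10, 3, 60);
        System.out.println(warningIfUnbalanced(plan, 1.5));
    }
}
```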





[jira] [Updated] (KAFKA-3438) Rack Aware Replica Reassignment should warn of overloaded brokers

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3438:
---
Affects Version/s: 0.10.0.0

> Rack Aware Replica Reassignment should warn of overloaded brokers
> -
>
> Key: KAFKA-3438
> URL: https://issues.apache.org/jira/browse/KAFKA-3438
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ben Stopford
> Fix For: 0.10.0.1
>
>
> We've changed the replica reassignment code to be rack aware.
> One problem that might catch users out would be that they rebalance the 
> cluster using kafka-reassign-partitions.sh but their rack configuration means 
> that some high proportion of replicas are pushed onto a single, or small 
> number of, brokers. 
> This should be an easy problem to avoid, by changing the rack assignment 
> information, but we should probably warn users if they are going to create 
> something that is unbalanced. 
> So imagine I have a Kafka cluster of 12 nodes spread over two racks with rack 
> awareness enabled. If I add a 13th machine, on a new rack, and run the 
> rebalance tool, that new machine will get ~6x as many replicas as the least 
> loaded broker. 
> Suggest a warning be added to the tool output when --generate is called: 
> "The most loaded broker has 2.3x as many replicas as the least loaded 
> broker. This is likely due to an uneven distribution of brokers across racks. 
> You're advised to alter the rack config so there are approximately the same 
> number of brokers per rack" and displays the individual rack→#brokers and 
> broker→#replicas data for the proposed move.  





[jira] [Updated] (KAFKA-3437) We don't need sitedocs package for every scala version

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3437:
---
Affects Version/s: 0.9.0.0

> We don't need sitedocs package for every scala version
> --
>
> Key: KAFKA-3437
> URL: https://issues.apache.org/jira/browse/KAFKA-3437
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Minor
> Fix For: 0.10.0.1
>
>
> When running "./gradlew releaseTarGzAll", it generates a binary tarball for 
> every scala version we support (good!) and also a sitedoc tarball for every 
> scala version we support (useless).
> It would be nice if we had a way to generate just one sitedoc tarball for 
> our release.





[jira] [Updated] (KAFKA-3437) We don't need sitedocs package for every scala version

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3437:
---
Fix Version/s: 0.10.0.1

> We don't need sitedocs package for every scala version
> --
>
> Key: KAFKA-3437
> URL: https://issues.apache.org/jira/browse/KAFKA-3437
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Minor
> Fix For: 0.10.0.1
>
>
> When running "./gradlew releaseTarGzAll", it generates a binary tarball for 
> every scala version we support (good!) and also a sitedoc tarball for every 
> scala version we support (useless).
> It would be nice if we had a way to generate just one sitedoc tarball for 
> our release.





[jira] [Assigned] (KAFKA-3437) We don't need sitedocs package for every scala version

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3437:
--

Assignee: Grant Henke

> We don't need sitedocs package for every scala version
> --
>
> Key: KAFKA-3437
> URL: https://issues.apache.org/jira/browse/KAFKA-3437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Minor
>
> When running "./gradlew releaseTarGzAll", it generates a binary tarball for 
> every scala version we support (good!) and also a sitedoc tarball for every 
> scala version we support (useless).
> It would be nice if we had a way to generate just one sitedoc tarball for 
> our release.





[jira] [Work started] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-22 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3409 started by Jason Gustafson.
--
> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, 
> so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
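The recovery the reporter suggests can be sketched as below. This is a hedged stand-alone illustration, not the MirrorMaker patch: the real exception is org.apache.kafka.clients.consumer.CommitFailedException, thrown by consumer.commitSync() after a rebalance; a local stand-in class is used here so the sketch compiles without kafka-clients on the classpath.

```java
// Hypothetical sketch: swallow a failed commit caused by a rebalance and let
// the thread continue, instead of propagating a fatal error that hangs it.
public class CommitRecoverySketch {

    // Stand-in for org.apache.kafka.clients.consumer.CommitFailedException.
    static class CommitFailedException extends RuntimeException {}

    // Attempts the commit; on CommitFailedException, logs and reports
    // failure. The caller's next poll() rejoins the group, after which
    // committing can resume with the newly assigned partitions.
    static boolean commitOrRecover(Runnable commit) {
        try {
            commit.run();
            return true;
        } catch (CommitFailedException e) {
            // Offsets for revoked partitions now belong to another member;
            // continue rather than treating this as fatal.
            System.err.println(
                "Commit failed due to rebalance; will rejoin on next poll()");
            return false;
        }
    }

    public static void main(String[] args) {
        boolean ok = commitOrRecover(() -> { throw new CommitFailedException(); });
        System.out.println("thread still alive; commit succeeded: " + ok);
    }
}
```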





[jira] [Updated] (KAFKA-3445) ConnectorConfig should validate TASKS_MAX_CONFIG's lower bound limit

2016-03-22 Thread Ryan P (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan P updated KAFKA-3445:
--
Status: Patch Available  (was: Open)

> ConnectorConfig should validate TASKS_MAX_CONFIG's lower bound limit 
> -
>
> Key: KAFKA-3445
> URL: https://issues.apache.org/jira/browse/KAFKA-3445
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: Ryan P
>Priority: Trivial
>  Labels: newbie
> Attachments: KAFKA-3445.patch
>
>
> I'll be the first to admit this is a bit nit-picky, but any property marked 
> with Importance.HIGH should be guarded against nonsensical values. 
> With that said, I would like to suggest that TASKS_MAX_CONFIG be validated 
> against a lower bound of 1. 
> I do understand this is unlikely to happen and the configuration is 
> nonsensical, but there is no penalty for stopping someone from trying it out. 
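The requested guard is a one-line lower-bound check at configuration time. The sketch below uses illustrative names, not the actual ConnectorConfig code; in the real config definition the cleaner fix would likely be ConfigDef's built-in validator, Range.atLeast(1), attached where tasks.max is defined.

```java
// Hypothetical illustration of validating tasks.max against a lower bound
// of 1, rejecting nonsensical values up front with a clear message.
public class TasksMaxGuard {

    // Returns the value unchanged when valid; throws otherwise.
    static int validateTasksMax(int tasksMax) {
        if (tasksMax < 1) {
            throw new IllegalArgumentException(
                "tasks.max must be at least 1, but was " + tasksMax);
        }
        return tasksMax;
    }

    public static void main(String[] args) {
        System.out.println(validateTasksMax(4));
        try {
            validateTasksMax(0);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```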





[jira] [Updated] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-22 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3441:
---
Affects Version/s: 0.10.0.0

> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Gwen Shapira
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch




