Build failed in Jenkins: kafka-trunk-jdk8 #204

2015-12-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2931: add system test for consumer rolling upgrades

--
[...truncated 6670 lines...]
org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSourceConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateSinkConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateAndStop PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runt

[GitHub] kafka pull request: 0.9.0

2015-12-03 Thread chamberlain1990
GitHub user chamberlain1990 opened a pull request:

https://github.com/apache/kafka/pull/627

0.9.0



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/kafka 0.9.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/627.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #627


commit 7710b367fd26a0c41565f35200748c23616b4477
Author: Gwen Shapira 
Date:   2015-11-07T03:46:30Z

Changing version to 0.9.0.0

commit 27d44afe664bff45d62f72335fdbb56671561512
Author: Jason Gustafson 
Date:   2015-11-08T19:38:50Z

KAFKA-2723: new consumer exception cleanup (0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #452 from hachikuji/KAFKA-2723

commit 32cd3e35f1ea8251a51860cc48a44fb2fbfd7c0e
Author: Jason Gustafson 
Date:   2015-11-08T20:36:42Z

HOTFIX: fix group coordinator edge cases around metadata storage callback 
(0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #453 from hachikuji/hotfix-group-coordinator-0.9

commit 1fd79f57b4a73308c59b797974086ca09af19b98
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T04:41:35Z

KAFKA-2480: Handle retriable and non-retriable exceptions thrown by sink 
tasks.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #450 from ewencp/kafka-2480-unrecoverable-task-errors

(cherry picked from commit f4b87deefecf4902992a84d4a3fe3b99a94ff72b)
Signed-off-by: Gwen Shapira 

commit 48013222fd426685d2907a760290d2e7c7d25aea
Author: Geoff Anderson 
Date:   2015-11-09T04:52:16Z

KAFKA-2773; (0.9.0 branch) Fixed broken vagrant provision scripts for static 
zk/broker cluster

Author: Geoff Anderson 

Reviewers: Gwen Shapira

Closes #455 from granders/KAFKA-2773-0.9.0-vagrant-fix

commit 417e283d643d8865aa3e79dffa373c8cc853d78f
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T06:11:03Z

KAFKA-2774: Rename Copycat to Kafka Connect

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #456 from ewencp/kafka-2774-rename-copycat

(cherry picked from commit f2031d40639ef34c1591c22971394ef41c87652c)
Signed-off-by: Gwen Shapira 

commit 02fbdaa4475fd12a0fdccaa103bf27cbc1bfd077
Author: Rajini Sivaram 
Date:   2015-11-09T15:23:47Z

KAFKA-2779; Close SSL socket channel on remote connection close

Close socket channel in finally block to avoid file descriptor leak when 
remote end closes the connection
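
A minimal sketch of the close-in-finally pattern this commit describes; the
class and method below are illustrative only, not the actual Kafka networking
code:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    // Sketch: close the channel in a finally block so the file descriptor is
    // released even when the remote end has already closed the connection and
    // the read throws or signals end-of-stream.
    public final class RemoteCloseSketch {
        static void drainAndClose(SocketChannel channel) throws IOException {
            try {
                ByteBuffer buffer = ByteBuffer.allocate(1024);
                if (channel.read(buffer) < 0) {
                    throw new IOException("Connection closed by remote end");
                }
            } finally {
                channel.close();   // without this, the error path above leaks the descriptor
            }
        }
    }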

Author: Rajini Sivaram 

Reviewers: Ismael Juma , Jun Rao 

Closes #460 from rajinisivaram/KAFKA-2779

(cherry picked from commit efbebc6e843850b7ed9a1d015413c99f114a7d92)
Signed-off-by: Jun Rao 

commit fdefef9536acf8569607a980a25237ef4794f645
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T17:10:20Z

KAFKA-2781; Only require signing artifacts when uploading archives.

Author: Ewen Cheslack-Postava 

Reviewers: Jun Rao 

Closes #461 from ewencp/kafka-2781-no-signing-for-install

(cherry picked from commit a24f9a23a6d8759538e91072e8d96d158d03bb63)
Signed-off-by: Jun Rao 

commit 7471394c5485a2114d35c6345d95e161a0ee6586
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:19:27Z

KAFKA-2776: Fix lookup of schema conversion cache size in JsonConverter.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #458 from ewencp/kafka-2776-json-converter-cache-config-fix

(cherry picked from commit e9fc7b8c84908ae642339a2522a79f8bb5155728)
Signed-off-by: Gwen Shapira 

commit 3aa3e85d942b514cbe842a6b3c3fe214c0ecf401
Author: Jason Gustafson 
Date:   2015-11-09T18:26:17Z

HOTFIX: bug updating cache when loading group metadata

The bug causes only the first instance of group metadata in the topic to be 
written to the cache (because of the putIfNotExists in addGroup). Coordinator 
fail-over won't work properly unless the cache is loaded with the right 
metadata.
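
A schematic illustration of the failure mode described above, using
hypothetical names rather than the actual GroupMetadataManager/addGroup code:
with put-if-absent semantics, replaying the metadata topic keeps only the
first record per group, so newer metadata read later in the log never reaches
the cache.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of the caching behavior, not Kafka's actual code.
    public class GroupCacheSketch {
        private final Map<String, String> cache = new ConcurrentHashMap<>();

        // Buggy load path: putIfAbsent keeps only the first (oldest) entry per group.
        void loadBuggy(String groupId, String metadata) {
            cache.putIfAbsent(groupId, metadata);
        }

        // Fixed load path: later records, read in log order, overwrite earlier
        // ones, so fail-over sees the most recent metadata for each group.
        void loadFixed(String groupId, String metadata) {
            cache.put(groupId, metadata);
        }
    }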

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #462 from hachikuji/hotfix-group-loading

(cherry picked from commit 2b04004de878823fe631af1f3f85108c0b38caec)
Signed-off-by: Guozhang Wang 

commit e627558a5e62d185c88650af845d7b74e9c290f8
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:27:18Z

KAFKA-2775: Move exceptions into API package for Kafka Connect.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #457 from ewencp/kafka-2775-exceptions-in-api-package

(cherry picked from commit bc76e6704e8f14d59bb5d4fcf9bdf544c9e463bf)
Signed-off-by: Gwen Shapira 

commit 4069011ee4dc0f3500190bb93a3b79180cd34eda
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:3

Build failed in Jenkins: kafka-trunk-jdk7 #875

2015-12-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2931: add system test for consumer rolling upgrades

--
[...truncated 1834 lines...]

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.SslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup PASSED

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.OffsetCommitTe

[jira] [Commented] (KAFKA-2931) Consumer rolling upgrade test case

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15039764#comment-15039764
 ] 

ASF GitHub Bot commented on KAFKA-2931:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/619


> Consumer rolling upgrade test case
> --
>
> Key: KAFKA-2931
> URL: https://issues.apache.org/jira/browse/KAFKA-2931
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> We need a system test which covers the rolling upgrade process for the new 
> consumer. The idea is to start the consumers with a "range" assignment 
> strategy and then upgrade to "round-robin" without any down-time. This 
> validates the coordinator's protocol selection process.
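
The quoted description boils down to switching partition.assignment.strategy
on the new consumer without downtime. A hedged sketch of the kind of
configuration change such an upgrade involves; the broker address, group id,
and the two-phase bounce procedure are assumptions for illustration, not taken
from the test itself:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.RangeAssignor;
    import org.apache.kafka.clients.consumer.RoundRobinAssignor;

    // Illustrative only: phase 0 = before upgrade (range), phase 1 = first rolling
    // bounce (both assignors listed so a mixed group can still agree on range),
    // phase 2 = second bounce (round-robin only).
    public class AssignorUpgradeSketch {
        static KafkaConsumer<byte[], byte[]> consumerForPhase(int phase) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "upgrade-test-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            String strategy;
            if (phase == 0) {
                strategy = RangeAssignor.class.getName();
            } else if (phase == 1) {
                strategy = RoundRobinAssignor.class.getName() + "," + RangeAssignor.class.getName();
            } else {
                strategy = RoundRobinAssignor.class.getName();
            }
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, strategy);
            return new KafkaConsumer<>(props);
        }
    }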



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2931: add system test for consumer rolli...

2015-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/619


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #203

2015-12-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2905: System test for rolling upgrade to enable ZooKeeper ACLs

--
[...truncated 4535 lines...]
org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
org.junit.ComparisonFailure: expected: but was:
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apac

Jenkins build is back to normal : kafka-trunk-jdk7 #874

2015-12-03 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2945:
---
Status: Patch Available  (was: Open)

> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15039619#comment-15039619
 ] 

ASF GitHub Bot commented on KAFKA-2945:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/626

KAFKA-2945: CreateTopic - protocol and server side implementation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka create-wire

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/626.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #626


commit d1fe53ecb9ccdc94457efcb61332cd54ca7b8095
Author: Grant Henke 
Date:   2015-12-02T03:23:45Z

KAFKA-2945: CreateTopic - protocol and server side implementation




> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2945: CreateTopic - protocol and server ...

2015-12-03 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/626

KAFKA-2945: CreateTopic - protocol and server side implementation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka create-wire

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/626.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #626


commit d1fe53ecb9ccdc94457efcb61332cd54ca7b8095
Author: Grant Henke 
Date:   2015-12-02T03:23:45Z

KAFKA-2945: CreateTopic - protocol and server side implementation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-30 Allow for brokers to have plug-able consensus and meta data storage sub systems

2015-12-03 Thread Todd Palino
Come on, Jay. Anyone can get up in the morning and run if they have the
willpower :)

Granted I do have some bias here, since we have tooling in place that makes
deployments and monitoring easier. But even at that, I would not say
Zookeeper is difficult to run or monitor. I’m not denying that there are
complaints, but my experience has always been that complaints of that type
are either related more to the specific way the dependency is implemented
(that is, Kafka’s not using it correctly or is otherwise generating errors
that say “Zookeeper” in them), or it’s related to a bias against the
dependency (I don’t like Zookeeper, I have XYZ installed, etc.).

The point that for a small installation Zookeeper can represent a large
footprint is well made. I wonder how many of these people are being
ill-served by recommendations from people like me that you should not run
Kafka and Zookeeper on the same systems. Sure, we’d never do that at
LinkedIn just because we are looking for high performance and a few more
systems isn’t a big deal. But for a lower performance environment, it’s
really not a problem to colocate the applications.

As far as the controller goes, I’m perfectly willing to accept that my
desire to get rid of it is from my bias against it because of how many
problems we’ve run into with that code. We can probably both agree that the
controller code needs an overhaul regardless. It’s stood up well, but as
the clusters get larger it’s definitely shows cracks.

-Todd


On Thu, Dec 3, 2015 at 11:37 AM, Jay Kreps  wrote:

> Hey Todd,
>
> I actually agree on both counts.
>
> I would summarize the first comment as "Zookeeper is not hard to
> operationalize if you are Todd Palino"--also in that category of
> things that are not hard: running 13 miles at 5:00 am. Basically I
> totally agree that ZK is now a solved problem at LinkedIn. :-)
>
> Empirically, though, it is really hard for a lot of our users. It is
> one of the largest sources of problems we see in people's clusters. We
> could perhaps get part of the way by improving our zk usage and
> documentation, and it is certainly the case that we could potentially
> make things worse in trying to make them better, but I don't think
> that is the same as saying there isn't a problem.
>
> I totally agree with your second comment. In some sense what I was
> sketching out is just replacing ZK. But part of the design of Kafka
> was because we already had ZK. So there might be a way to further
> rationalize the metadata log and the data logs if you kind of went
> back to first principles and thought about it. I don't have any idea
> how, but I share that intuition.
>
> I do think having the controller, though, is quite useful. I think
> this pattern of avoiding many rounds of consensus by just doing one
> round to pick a leader is a good one. If you think about it Paxos =>
> Multi-paxos is basically optimizing by lumping together consensus
> rounds on a per message basis into a leader which then handles many
> messages, and what Kafka does is kind of like Multi-multi-paxos in
> that it lumps together many leadership elections into one central
> controller election which then picks all the leaders. In some ways
> having central decision makers seems inelegant (aren't we supposed to
> be distributed?) but it does allow you to be both very very fast in
> making lots of decisions (vs doing thousands of independent leadership
> elections) and also to do things that require global knowledge (like
> balancing leadership).
>
> Cheers,
>
> -Jay
>
>
>
> On Thu, Dec 3, 2015 at 10:05 AM, Todd Palino  wrote:
> > This kind of discussion always puts me in mind of stories that start “So
> I
> > wrote my own encryption. How hard can it be?” :)
> >
> > Joking aside, I do have a few thoughts on this. First I have to echo
> Joel’s
> > perspective on Zookeeper. Honestly, it is one of the few applications we
> > can forget about, so I have a hard time understanding pain around running
> > it. You set it up, and unless you have a hardware failure to deal with,
> > that’s it. Yes, there are ways to abusively use it, just like with any
> > application, but Kafka is definitely not one of those use cases. I also
> > disagree that it’s hard to share the ZK cluster safely. We do it all the
> > time. We share most often with other Kafka clusters, but we also share
> with
> > other applications. This goes back to the “abusive use pattern”. Yes, we
> > don’t share with them. Nobody should. Yes, it requires monitoring and you
> > have to configure it correctly. But configuration is a one-time charge
> > (both in terms of capital expenses and knowledge acquisition) and as
> > applications goes, is minimal.
> >
> > So that said, I also don’t think this is necessarily a bad idea, but this
> > is not the approach I would take. You’re picking off a low-level piece
> and
> > talking about reimplementing primitives. I think it is much better to
> start
> > at the top and have a discussion about where 

[jira] [Resolved] (KAFKA-2905) System test for rolling upgrade to enable ZooKeeper ACLs with SASL

2015-12-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2905.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

> System test for rolling upgrade to enable ZooKeeper ACLs with SASL
> --
>
> Key: KAFKA-2905
> URL: https://issues.apache.org/jira/browse/KAFKA-2905
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> Write a ducktape test to verify the ability of performing a rolling upgrade 
> to enable the use of secure ACLs and SASL with ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2905) System test for rolling upgrade to enable ZooKeeper ACLs with SASL

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15039567#comment-15039567
 ] 

ASF GitHub Bot commented on KAFKA-2905:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/598


> System test for rolling upgrade to enable ZooKeeper ACLs with SASL
> --
>
> Key: KAFKA-2905
> URL: https://issues.apache.org/jira/browse/KAFKA-2905
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>
> Write a ducktape test to verify the ability of performing a rolling upgrade 
> to enable the use of secure ACLs and SASL with ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2905: System test for rolling upgrade to...

2015-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/598


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1778) Create new re-elect controller admin function

2015-12-03 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038651#comment-15038651
 ] 

Grant Henke commented on KAFKA-1778:


I broke this into its own task since it's independently tracked by [KIP-39: 
Pinning controller to broker 
|https://cwiki.apache.org/confluence/display/KAFKA/KIP-39+Pinning+controller+to+broker].

> Create new re-elect controller admin function
> -
>
> Key: KAFKA-1778
> URL: https://issues.apache.org/jira/browse/KAFKA-1778
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joe Stein
>Assignee: Abhishek Nigam
>
> kafka --controller --elect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1778) Create new re-elect controller admin function

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1778:
---
Issue Type: New Feature  (was: Sub-task)
Parent: (was: KAFKA-1694)

> Create new re-elect controller admin function
> -
>
> Key: KAFKA-1778
> URL: https://issues.apache.org/jira/browse/KAFKA-1778
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joe Stein
>Assignee: Abhishek Nigam
>
> kafka --controller --elect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1772) Add an Admin message type for request response

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-1772.

Resolution: Won't Fix

> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
>
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1772) Add an Admin message type for request response

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1772:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: KAFKA-1694)

> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
>
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2229) Phase 1: Requests and KafkaApis

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2229:
---
Resolution: Duplicate
  Assignee: Grant Henke  (was: Andrii Biletskyi)
Status: Resolved  (was: Patch Available)

> Phase 1: Requests and KafkaApis
> ---
>
> Key: KAFKA-2229
> URL: https://issues.apache.org/jira/browse/KAFKA-2229
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrii Biletskyi
>Assignee: Grant Henke
> Attachments: KAFKA-2229.patch, KAFKA-2229_2015-06-30_16:59:17.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2891) Gaps in messages delivered by new consumer after Kafka restart

2015-12-03 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-2891.
---
Resolution: Fixed

This was fixed by setting min.insync.replicas=2 correctly in the tests. Closing 
this since the subtasks have been closed.

> Gaps in messages delivered by new consumer after Kafka restart
> --
>
> Key: KAFKA-2891
> URL: https://issues.apache.org/jira/browse/KAFKA-2891
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Priority: Critical
>
> Replication tests when run with the new consumer with SSL/SASL were failing 
> very often because messages were not being consumed from some topics after a 
> Kafka restart. The fix in KAFKA-2877 has made this a lot better. But I am 
> still seeing some failures (less often now) because a small set of messages 
> are not received after Kafka restart. This failure looks slightly different 
> from the one before the fix for KAFKA-2877 was applied, hence the new defect. 
> The test fails because not all acked messages are received by the consumer, 
> and the number of messages missing are quite small.
> [~benstopford] Are the upgrade tests working reliably with KAFKA-2877 now?
> Not sure if any of these log entries are important:
> {quote}
> [2015-11-25 14:41:12,342] INFO SyncGroup for group test-consumer-group failed 
> due to NOT_COORDINATOR_FOR_GROUP, will find new coordinator and rejoin 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,342] INFO Marking the coordinator 2147483644 dead. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:12,958] INFO Attempt to join group test-consumer-group 
> failed due to unknown member id, resetting and retrying. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2015-11-25 14:41:42,437] INFO Fetch offset null is out of range, resetting 
> offset (org.apache.kafka.clients.consumer.internals.Fetcher)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2229) Phase 1: Requests and KafkaApis

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2229:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: KAFKA-1694)

> Phase 1: Requests and KafkaApis
> ---
>
> Key: KAFKA-2229
> URL: https://issues.apache.org/jira/browse/KAFKA-2229
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrii Biletskyi
>Assignee: Andrii Biletskyi
> Attachments: KAFKA-2229.patch, KAFKA-2229_2015-06-30_16:59:17.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2947) AlterTopic - protocol and server side implementation

2015-12-03 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2947:
--

 Summary: AlterTopic - protocol and server side implementation
 Key: KAFKA-2947
 URL: https://issues.apache.org/jira/browse/KAFKA-2947
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2946) DeleteTopic - protocol and server side implementation

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2946:
--

Assignee: Grant Henke

> DeleteTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2946
> URL: https://issues.apache.org/jira/browse/KAFKA-2946
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2946) DeleteTopic - protocol and server side implementation

2015-12-03 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2946:
--

 Summary: DeleteTopic - protocol and server side implementation
 Key: KAFKA-2946
 URL: https://issues.apache.org/jira/browse/KAFKA-2946
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2945:
---
Summary: CreateTopic - protocol and server side implementation  (was: 
CreateTopic - Protocol and Server Side Implimentation)

> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2945) CreateTopic - Protocol and Server Side Implimentation

2015-12-03 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2945:
--

 Summary: CreateTopic - Protocol and Server Side Implimentation
 Key: KAFKA-2945
 URL: https://issues.apache.org/jira/browse/KAFKA-2945
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1694) KIP-4: Command line and centralized operations

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1694:
---
Summary: KIP-4: Command line and centralized operations  (was: kafka 
command line and centralized operations)

> KIP-4: Command line and centralized operations
> --
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Grant Henke
>Priority: Critical
> Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
> KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2227) Delete me

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2227:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: KAFKA-1694)

> Delete me
> -
>
> Key: KAFKA-2227
> URL: https://issues.apache.org/jira/browse/KAFKA-2227
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrii Biletskyi
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2228) Delete me

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2228:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: KAFKA-1694)

> Delete me
> -
>
> Key: KAFKA-2228
> URL: https://issues.apache.org/jira/browse/KAFKA-2228
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrii Biletskyi
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-1694) kafka command line and centralized operations

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-1694 started by Grant Henke.
--
> kafka command line and centralized operations
> -
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Grant Henke
>Priority: Critical
> Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
> KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1694) kafka command line and centralized operations

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-1694:
---
Status: Open  (was: Patch Available)

> kafka command line and centralized operations
> -
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Grant Henke
>Priority: Critical
> Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
> KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #202

2015-12-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2825: Add controller failover to existing replication tests

[wangguoz] KAFKA-2942: inadvertent auto-commit when pre-fetching can cause 
message

--
[...truncated 4535 lines...]
org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED
:connect:runtime:checkstyleMain
:connect:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:connect:runtime:processTestResources
:connect:runtime:testClasses
:connect:runtime:checkstyleTest
:connect:runtime:test

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
org.junit.ComparisonFailure: expected: but was:
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.ka

Jenkins build is back to normal : kafka_0.9.0_jdk7 #56

2015-12-03 Thread Apache Jenkins Server
See 



[jira] [Resolved] (KAFKA-2908) Consumer Sporadically Stops Consuming From Partition After Server Restart

2015-12-03 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford resolved KAFKA-2908.
-
Resolution: Cannot Reproduce

Haven't been able to reproduce this issue. I'll add a clause to the system 
tests that identifies this type of problem should it occur again. 

> Consumer Sporadically Stops Consuming From Partition After Server Restart
> -
>
> Key: KAFKA-2908
> URL: https://issues.apache.org/jira/browse/KAFKA-2908
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Ben Stopford
>Assignee: Jason Gustafson
> Attachments: 2015-11-28--001.tar.gz
>
>
> *Context:*
> Instance of the rolling upgrade test. 10s sleeps have been put around node 
> restarts to ensure stability when subsequent nodes go down. Consumer timeout 
> has been set to 60s. Proudcer has been throttled to 100 messages per second. 
> Failure is rare: It occurred once in 60 executions (6 executions per run #276 
> -> #285 of the system_test_branch_builder)
> *Reported Failure:*
> At least one acked message did not appear in the consumed messages. 
> acked_minus_consumed: 16385, 16388, 16391, 16394, 16397, 16400, 16403, 16406, 
> 16409, 16412, 16415, 16418, 16421, 16424, 16427, 16430, 16433, 16436, 16439, 
> 16442, ...plus 1669 more
> *Immediate Observations:*
> * The list of messages not consumed are all in partition 1.
> * Production and Consumption continues throughout the test (there is no 
> complete write or read failure as we have seen elsewhere)
> * The messages ARE present in the data files:
> e.g. 
> {quote}
> Examining missing value 16385. This was written to p1,offset:5453
> => there is an entry in all data files for partition 1 for this (presumably 
> meaning it was replicated correctly)
> kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files 
> worker10/kafka-data-logs/test_topic-1/.log | grep 
> 'offset: 5453'
> worker10,9,8:
> offset: 5453 position: 165346 isvalid: true payloadsize: 5 magic: 0 
> compresscodec: NoCompressionCodec crc: 1502953075 payload: 16385
> => ID 16385 is definitely present in the Kafka logs suggesting a problem 
> consumer-side
> {quote}
> These entries definitely do not appear in the consumer stdout file either. 
> *Timeline:*
> {quote}
> 15:27:02,232 - Producer sends first message
> ..servers 1, 2 are restarted (clean shutdown)
> 15:29:42,718 - Server 3 shutdown complete
> 15:29:42,712 - (Controller fails over): Broker 2 starting become controller 
> state transition (kafka.controller.KafkaController)
> 15:29:42,743 - New leader is 2 (LeaderChangeListener)
> 15:29:43,239 - WARN Broker 2 ignoring LeaderAndIsr request from controller 2 
> with correlation id 0 epoch 7 for partition (test_topic,1) since its 
> associated leader epoch 8 is old. Current leader epoch is 8 
> (state.change.logger)
> 15:29:45,642 - Producer starts writing messages that are never consumed
> 15:30:10,804 - Last message sent by producer
> 15:31:10,983 - Consumer times out after 60s wait without messages
> {quote}
> Logs for this run are attached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #873

2015-12-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2825: Add controller failover to existing replication tests

[wangguoz] KAFKA-2942: inadvertent auto-commit when pre-fetching can cause 
message

--
[...truncated 1404 lines...]

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.DynamicConfigChangeTest > testProcessNotification PASSED

kafka.server.DynamicConfigChangeTest > testClientQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integr

[jira] [Commented] (KAFKA-2933) Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment

2015-12-03 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038496#comment-15038496
 ] 

Jason Gustafson commented on KAFKA-2933:


[~guozhang] I've been unable to reproduce this locally. Perhaps first we'll 
need to improve the assertion message to say exactly what the problem was. My 
guess is that we've somehow hit a situation where the join group is stuck 
waiting on a session timeout.
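
One possible shape for such a message is to report the expected partitions, the 
union of what was actually assigned, and the per-consumer breakdown. A minimal 
Java sketch; the helper name and types are illustrative, not the actual Scala 
test code:

{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.TopicPartition;

public final class AssignmentFailureMessage {
    // Summarize expected vs. actual assignments so a transient failure like this
    // one can be diagnosed from the build log alone.
    public static String describe(Set<TopicPartition> expected,
                                  Map<String, Set<TopicPartition>> actualByConsumer) {
        Set<TopicPartition> assigned = new HashSet<>();
        for (Set<TopicPartition> parts : actualByConsumer.values()) {
            assigned.addAll(parts);
        }
        Set<TopicPartition> missing = new HashSet<>(expected);
        missing.removeAll(assigned);
        return "Did not get valid assignment: expected=" + expected
                + " assigned=" + assigned
                + " missing=" + missing
                + " perConsumer=" + actualByConsumer;
    }
}
{code}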

> Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment
> -
>
> Key: KAFKA-2933
> URL: https://issues.apache.org/jira/browse/KAFKA-2933
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.1.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>
> {code}
> kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
> java.lang.AssertionError: Did not get valid assignment for partitions 
> [topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, 
> topic1-3, topic1-1, topic2-2] after we changed subscription
> at org.junit.Assert.fail(Assert.java:88)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
> at 
> kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:644)
> at 
> kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:663)
> at 
> kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextConsumerTest.scala:461)
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access
> {code}
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1582/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2825) Add controller failover to existing replication tests

2015-12-03 Thread Anna Povzner (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anna Povzner updated KAFKA-2825:

Reviewer: Guozhang Wang  (was: Geoff Anderson)

> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.1.0
>
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-30 Allow for brokers to have plug-able consensus and meta data storage sub systems

2015-12-03 Thread Jay Kreps
Hey Todd,

I actually agree on both counts.

I would summarize the first comment as "Zookeeper is not hard to
operationalize if you are Todd Palino"--also in that category of
things that are not hard: running 13 miles at 5:00 am. Basically I
totally agree that ZK is now a solved problem at LinkedIn. :-)

Empirically, though, it is really hard for a lot of our users. It is
one of the largest sources of problems we see in people's clusters. We
could perhaps get part of the way by improving our zk usage and
documentation, and it is certainly the case that we could potentially
make things worse in trying to make them better, but I don't think
that is the same as saying there isn't a problem.

I totally agree with your second comment. In some sense what I was
sketching out is just replacing ZK. But part of the design of Kafka
was because we already had ZK. So there might be a way to further
rationalize the metadata log and the data logs if you kind of went
back to first principles and thought about it. I don't have any idea
how, but I share that intuition.

I do think having the controller, though, is quite useful. I think
this pattern of avoiding many rounds of consensus by just doing one
round to pick a leader is a good one. If you think about it Paxos =>
Multi-paxos is basically optimizing by lumping together consensus
rounds on a per message basis into a leader which then handles many
messages, and what Kafka does is kind of like Multi-multi-paxos in
that it lumps together many leadership elections into one central
controller election which then picks all the leaders. In some ways
having central decision makers seems inelegant (aren't we supposed to
be distributed?) but it does allow you to be both very very fast in
making lots of decisions (vs doing thousands of independent leadership
elections) and also to do things that require global knowledge (like
balancing leadership).

Cheers,

-Jay



On Thu, Dec 3, 2015 at 10:05 AM, Todd Palino  wrote:
> This kind of discussion always puts me in mind of stories that start “So I
> wrote my own encryption. How hard can it be?” :)
>
> Joking aside, I do have a few thoughts on this. First I have to echo Joel’s
> perspective on Zookeeper. Honestly, it is one of the few applications we
> can forget about, so I have a hard time understanding pain around running
> it. You set it up, and unless you have a hardware failure to deal with,
> that’s it. Yes, there are ways to abusively use it, just like with any
> application, but Kafka is definitely not one of those use cases. I also
> disagree that it’s hard to share the ZK cluster safely. We do it all the
> time. We share most often with other Kafka clusters, but we also share with
> other applications. This goes back to the “abusive use pattern”. Yes, we
> don’t share with them. Nobody should. Yes, it requires monitoring and you
> have to configure it correctly. But configuration is a one-time charge
> (both in terms of capital expenses and knowledge acquisition) and as
> applications go, is minimal.
>
> So that said, I also don’t think this is necessarily a bad idea, but this
> is not the approach I would take. You’re picking off a low-level piece and
> talking about reimplementing primitives. I think it is much better to start
> at the top and have a discussion about where we want to end up as an entire
> application at 1.0. For example, a controller-free cluster where you just
> add brokers with no “special node” that handles all the coordination.
> Brokers join and leave the cluster as more of a mesh, where there is no
> specific choke point for control. They negotiate with each other for
> managing partitions, and new partitions are assigned out. We already have a
> concept of partition leadership, which means you don’t actually need to
> coordinate things that apply only to that partition (configs, writes,
> deletes).
>
> Of course, this is just one start of a discussion on it. There are a lot of
> ways you can go with overall design, but I think that trying to swap out
> the consensus system is not the right place to start it. Once you get a
> vision of where you want to be, it is much easier to identify what pieces
> require what levels of coordination between components. Then you can decide
> on what the right way to achieve that coordination is. Maybe the answer
> does end up being to implement internal consensus without Zookeeper or
> another dependency. But as has been said already, consensus systems are
> hard, and the corner cases can be a nightmare.
>
> While I’m generally a fan of plugins in the right places (for example,
> filtering messages produced or consumed on the clients), I think here it
> would lead to more misuse and anti-patterns. I also agree with Jay that we
> should have one thing that is well-managed and rigorously tested when it
> comes to consensus.
>
> -Todd
>
> On Thu, Dec 3, 2015 at 7:18 AM, Joel Koshy  wrote:
>
>> I’m on an extended vacation but "a native consensus implementation in
>>

Jenkins build is back to normal : kafka-trunk-jdk7 #872

2015-12-03 Thread Apache Jenkins Server
See 



Re: [DISCUSS] KIP-30 Allow for brokers to have plug-able consensus and meta data storage sub systems

2015-12-03 Thread Jay Kreps
Hey Joel,

I agree this isn't the most pressing thing for the project (i.e. we
shouldn't start now), but I think that is a separate question for
which direction we would head in.

I think whether it is "worth doing" kind of depends on what you
optimize for. If you consider just the problem of "getting rid of
zookeeper" in a large existing environment like LinkedIn, I agree this
doesn't have a ton of value. LinkedIn has already done the legwork of
operationalizing Zookeeper (which as you'll recall took us several
years to actually master). Once you've mastered it, though, it isn't a
big problem. I think the bigger improvement from that proposal from
LinkedIn's point of view would be removing the hard limit on partition
count by co-locating metadata and moving it out of memory.

But if instead you consider the average user adopting Kafka they are
often starting with very small three node clusters. Zookeeper is
effectively doubling their footprint (at least). They have several
years of operational learning to do to master it, and it roughly
doubles what they need to learn to run. For these people it is a
significant penalty. This is why this is the first question at
virtually every talk and one of the most requested things. I don't
think we can deny that people want this, and when you think about why,
and how their usage differs from a large established environment, I'm
not sure you can say they are wrong to want that. I suspect correcting
this would not particularly help big already established users who
have already mastered ZK but would probably roughly double the rate of
adoption.

-Jay

On Thu, Dec 3, 2015 at 7:18 AM, Joel Koshy  wrote:
> I’m on an extended vacation but "a native consensus implementation in
> Kafka" is one of my favorite topics so I’m compelled to chime in :) I have
> actually been thinking along the same lines that Jay is saying after
> reading the Raft paper - a couple of us at LinkedIn have had casual
> discussions about this (i.e., a Raft-like implementation embedded in Kafka)
> but at this point I’m not sure it is at the point of being a practical or
> necessary effort for the project per-se: I have heard from a few people
> that operating ZooKeeper is difficult but haven’t quite understood why this
> is so. It does need some care in selecting hardware and configuration but
> after that it should just work right? I really think the major concerns are
> caused by incorrect usage in the application (i.e., Kafka) which results in
> weird Kafka cluster behavior and then charging zookeeper for it.
> Furthermore, Raft-like implementations sound doable but I did have the
> concern that it will be very difficult in practice to perfect the
> implementation and Flavio is assuring us that it is indeed difficult :)
>
> Although it is unclear to me if it is worthwhile doing a native consensus
> implementation in Kafka and although I’m willing to take Flavio’s word for
> it given his experience with ZK and although I'm generally a proponent
> of McIlroy's
> views in such decisions
> ,
> I do think it is a super cool project to try out anyway given that the
> commit log is Kafka’s core abstraction and I think the network layer/other
> layers are really solid. It may just turn out to be really really elegant
> and will be a “nice-have” sort of win. i.e., I think it will be nice to not
> have to run one more service for Kafka and neat to have a native consensus
> implementation that we understand and know really well. At this point I’m
> not sure it will be a very big win though since I don’t quite see the
> issues in running ZooKeeper alongside Kafka and it will mean bringing in a
> fair deal of new complexity to Kafka itself.
>
> On the topic of wrapper libraries: from a developer perspective it is easy
> to use ZooKeeper incorrectly and I’m guessing there are more than a few
> anti-patterns in our code especially since we use a wrapper library. So I
> agree with Neha’s viewpoint that using wrappers such as zkclient/curator
> without being well-versed in the wrapper code itself is another cause for
> subtle bugs in our usage. Removing that wrapper is an interesting option
> but would again be a fairly significant effort - this is in effect going to
> be like reimplementing our own wrapper library around ZK. Another option is
> to just have a few committers/contributors to be the “experts/owners” of
> our zkclient/zookeeper usage. E.g., we have contributed a few fixes back to
> zkclient after diving into the code while debugging various issues. However
> these may take some time to get merged in and subsequently pulled into
> Kafka.
>
> On Tue, Dec 1, 2015 at 5:49 PM, Jay Kreps  wrote:
>
>> Hey Flavio,
>>
>> Yeah, I think we are largely in agreement on virtually all points.
>>
>> Where I saw ZK shine was really in in-house infrastructure. LinkedIn had a
>> dozen in-house systems that all used it, and it wouldn't have ma

[jira] [Resolved] (KAFKA-2942) Inadvertent auto-commit when pre-fetching can cause message loss

2015-12-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2942.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.1

Issue resolved by pull request 623
[https://github.com/apache/kafka/pull/623]

> Inadvertent auto-commit when pre-fetching can cause message loss
> 
>
> Key: KAFKA-2942
> URL: https://issues.apache.org/jira/browse/KAFKA-2942
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> Before returning from KafkaConsumer.poll(), we update the consumed position 
> and invoke poll(0) to send new fetches. In doing so, it is possible that an 
> auto-commit is triggered which would commit the updated offsets even though the 
> corresponding messages haven't yet been returned. If the process then crashes 
> before consuming the messages, 
> there would be a gap in the delivery.
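
For context, the window described above can also be closed on the application 
side by disabling auto-commit and committing only after the returned records 
have been processed. A minimal sketch against the 0.9 Java consumer; this is 
not the fix that went into pull request 623, and the broker address, group id, 
and topic name are illustrative:

{code}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        // Disable auto-commit so positions advanced by poll() are never committed
        // before the corresponding records have actually been processed.
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test_topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // Application processing happens before any offset is committed.
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // Commit only what has been fully processed in this iteration.
                consumer.commitSync();
            }
        }
    }
}
{code}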



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2942: inadvertent auto-commit when pre-f...

2015-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/623


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2942) Inadvertent auto-commit when pre-fetching can cause message loss

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038335#comment-15038335
 ] 

ASF GitHub Bot commented on KAFKA-2942:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/623


> Inadvertent auto-commit when pre-fetching can cause message loss
> 
>
> Key: KAFKA-2942
> URL: https://issues.apache.org/jira/browse/KAFKA-2942
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> Before returning from KafkaConsumer.poll(), we update the consumed position 
> and invoke poll(0) to send new fetches. In doing so, it is possible that an 
> auto-commit is triggered which would commit the updated offsets even though the 
> corresponding messages haven't yet been returned. If the process then crashes 
> before consuming the messages, 
> there would be a gap in the delivery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2825) Add controller failover to existing replication tests

2015-12-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2825:
-
   Resolution: Fixed
 Reviewer: Geoff Anderson  (was: Gwen Shapira)
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.1.0
>
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038298#comment-15038298
 ] 

ASF GitHub Bot commented on KAFKA-2718:
---

Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/613


> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.1
>
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}
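
The usual remedy is to give every test run its own throwaway directory instead 
of a reused fixed path. A minimal sketch using only the JDK; the prefix and 
cleanup strategy are illustrative, not the actual TestUtils code:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public final class TestDirs {
    // Allocate a unique temporary directory per test run so that ZooKeeper
    // snapshots and topic metadata never leak from one test into the next.
    public static Path newTempDir() throws IOException {
        Path dir = Files.createTempDirectory("kafka-zk-test-");
        // Best-effort cleanup; real tests typically delete the tree recursively in tearDown().
        dir.toFile().deleteOnExit();
        return dir;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Test data directory: " + newTempDir());
    }
}
{code}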



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2718: Add logging to investigate intermi...

2015-12-03 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/613


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2875:
-
Reviewer: Ismael Juma

> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>
> This adds a lot of noise when running the scripts; see this example from running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}
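
Even with several bindings on the class path, SLF4J binds to exactly one of 
them at runtime. A minimal sketch that prints the chosen implementation; only 
the public SLF4J API is used and the class name is illustrative:

{code}
import org.slf4j.LoggerFactory;

public class Slf4jBindingCheck {
    public static void main(String[] args) {
        // Prints the logger factory SLF4J actually bound to,
        // e.g. org.slf4j.impl.Log4jLoggerFactory as in the warning above.
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
    }
}
{code}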



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2839) Kafka connect log test failing

2015-12-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2839:
-
Reviewer: Gwen Shapira

> Kafka connect log test failing
> --
>
> Key: KAFKA-2839
> URL: https://issues.apache.org/jira/browse/KAFKA-2839
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: jin xing
>
> org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> org.junit.ComparisonFailure: expected: but was:
> at org.junit.Assert.assertEquals(Assert.java:115)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2839) Kafka connect log test failing

2015-12-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2839:
-
Assignee: jin xing

> Kafka connect log test failing
> --
>
> Key: KAFKA-2839
> URL: https://issues.apache.org/jira/browse/KAFKA-2839
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: jin xing
>
> org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> org.junit.ComparisonFailure: expected: but was:
> at org.junit.Assert.assertEquals(Assert.java:115)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2825) Add controller failover to existing replication tests

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038285#comment-15038285
 ] 

ASF GitHub Bot commented on KAFKA-2825:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/618


> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2825: Add controller failover to existin...

2015-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/618


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #201

2015-12-03 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: ConsoleConsumer - Fix number of processed messages count

--
[...truncated 1427 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponse

Re: [DISCUSS] KIP-30 Allow for brokers to have plug-able consensus and meta data storage sub systems

2015-12-03 Thread Todd Palino
This kind of discussion always puts me in mind of stories that start “So I
wrote my own encryption. How hard can it be?” :)

Joking aside, I do have a few thoughts on this. First I have to echo Joel’s
perspective on Zookeeper. Honestly, it is one of the few applications we
can forget about, so I have a hard time understanding pain around running
it. You set it up, and unless you have a hardware failure to deal with,
that’s it. Yes, there are ways to abusively use it, just like with any
application, but Kafka is definitely not one of those use cases. I also
disagree that it’s hard to share the ZK cluster safely. We do it all the
time. We share most often with other Kafka clusters, but we also share with
other applications. This goes back to the “abusive use pattern”. Yes, we
don’t share with them. Nobody should. Yes, it requires monitoring and you
have to configure it correctly. But configuration is a one-time charge
(both in terms of capital expenses and knowledge acquisition) and as
applications go, is minimal.

So that said, I also don’t think this is necessarily a bad idea, but this
is not the approach I would take. You’re picking off a low-level piece and
talking about reimplementing primitives. I think it is much better to start
at the top and have a discussion about where we want to end up as an entire
application at 1.0. For example, a controller-free cluster where you just
add brokers with no “special node” that handles all the coordination.
Brokers join and leave the cluster as more of a mesh, where there is no
specific choke point for control. They negotiate with each other for
managing partitions, and new partitions are assigned out. We already have a
concept of partition leadership, which means you don’t actually need to
coordinate things that apply only to that partition (configs, writes,
deletes).

Of course, this is just one start of a discussion on it. There are a lot of
ways you can go with overall design, but I think that trying to swap out
the consensus system is not the right place to start it. Once you get a
vision of where you want to be, it is much easier to identify what pieces
require what levels of coordination between components. Then you can decide
on what the right way to achieve that coordination is. Maybe the answer
does end up being to implement internal consensus without Zookeeper or
another dependency. But as has been said already, consensus systems are
hard, and the corner cases can be a nightmare.

While I’m generally a fan of plugins in the right places (for example,
filtering messages produced or consumed on the clients), I think here it
would lead to more misuse and anti-patterns. I also agree with Jay that we
should have one thing that is well-managed and rigorously tested when it
comes to consensus.

-Todd

On Thu, Dec 3, 2015 at 7:18 AM, Joel Koshy  wrote:

> I’m on an extended vacation but "a native consensus implementation in
> Kafka" is one of my favorite topics so I’m compelled to chime in :) I have
> actually been thinking along the same lines that Jay is saying after
> reading the Raft paper - a couple of us at LinkedIn have had casual
> discussions about this (i.e., a Raft-like implementation embedded in Kafka)
> but at this point I’m not sure it is at the point of being a practical or
> necessary effort for the project per-se: I have heard from a few people
> that operating ZooKeeper is difficult but haven’t quite understood why this
> is so. It does need some care in selecting hardware and configuration but
> after that it should just work right? I really think the major concerns are
> caused by incorrect usage in the application (i.e., Kafka) which results in
> weird Kafka cluster behavior and then charging zookeeper for it.
> Furthermore, Raft-like implementations sound doable but I did have the
> concern that it will be very difficult in practice to perfect the
> implementation and Flavio is assuring us that it is indeed difficult :)
>
> Although it is unclear to me if it is worthwhile doing a native consensus
> implementation in Kafka and although I’m willing to take Flavio’s word for
> it given his experience with ZK and although I'm generally a proponent
> of McIlroy's
> views in such decisions
> <
> https://en.wikipedia.org/wiki/Unix_philosophy#Doug_McIlroy_on_Unix_programming
> >,
> I do think it is a super cool project to try out anyway given that the
> commit log is Kafka’s core abstraction and I think the network layer/other
> layers are really solid. It may just turn out to be really really elegant
> and will be a “nice-have” sort of win. i.e., I think it will be nice to not
> have to run one more service for Kafka and neat to have a native consensus
> implementation that we understand and know really well. At this point I’m
> not sure it will be a very big win though since I don’t quite see the
> issues in running ZooKeeper alongside Kafka and it will mean bringing in a
> fair deal of new complexity to Kafka itself.
>
> On the topic of wrapp

[jira] [Assigned] (KAFKA-1694) kafka command line and centralized operations

2015-12-03 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-1694:
--

Assignee: Grant Henke  (was: Andrii Biletskyi)

> kafka command line and centralized operations
> -
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Grant Henke
>Priority: Critical
> Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
> KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka_0.9.0_jdk7 #55

2015-12-03 Thread Apache Jenkins Server
See 

Changes:

[geoff] Manually ported changes in 8c3c9548b636cdf760d2537afe115942d13bc003 to

[wangguoz] MINOR: ConsoleConsumer - Fix number of processed messages count

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision 2b4c5dc290ec6d89206f31856204b713e21c7d2b 
(refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2b4c5dc290ec6d89206f31856204b713e21c7d2b
 > git rev-list 20e958861e8fc0aef19a77c00c1fa0a2cd6312ad # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson1591285061545941544.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 17.631 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson7563268711906014224.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 --stacktrace clean jarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 655922 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.doWriteAction(DefaultFileLockManager.java:173)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.writeFile(DefaultFileLockManager.java:163)
at 
org.gradle.cache.internal.DefaultCacheAccess$UnitOfWorkFileAccess.writeFile(DefaultCacheAccess.java:404)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache.put(DefaultMultiProcessSafePersistentIndexedCache.java:49)
at 
org.g

[jira] [Created] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2015-12-03 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2944:


 Summary: NullPointerException in KafkaConfigStorage when config 
storage starts right before shutdown request
 Key: KAFKA-2944
 URL: https://issues.apache.org/jira/browse/KAFKA-2944
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Affects Versions: 0.9.0.0
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Relevant log where you can see a config update starting; the shutdown request 
then arrives and we end up with a NullPointerException:

{quote}
[2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
writing updated task configurations 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
(org.apache.kafka.connect.runtime.Connect)
[2015-12-03 09:12:56,224] INFO Stopping REST server 
(org.apache.kafka.connect.runtime.rest.RestServer)
[2015-12-03 09:12:56,227] INFO Stopped 
ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
(org.eclipse.jetty.server.ServerConnector)
[2015-12-03 09:12:56,234] INFO Stopped 
o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
(org.eclipse.jetty.server.handler.ContextHandler)
[2015-12-03 09:12:56,235] INFO REST server stopped 
(org.apache.kafka.connect.runtime.rest.RestServer)
[2015-12-03 09:12:56,235] INFO Herder stopping 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
thread (org.apache.kafka.connect.util.KafkaBasedLog)
java.lang.NullPointerException
at 
org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
at 
org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
at 
org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
at 
org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
at 
org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
at 
org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
at 
org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
at 
org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
[2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
(org.apache.kafka.connect.storage.KafkaConfigStorage)
java.util.concurrent.TimeoutException: Timed out waiting for future
at 
org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
at 
org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:640)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:598)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:184)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:159)
at java.lang.Thread.run(Thread.java:745)
[2015-12-03 09:13:26,704] ERROR Failed to reconfigure connector's tasks, 
retrying after backoff: 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
org.apache.kafka.connect.errors.ConnectException: Error writing root 
configuration to Kafka
at 
org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:355)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:640)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:598)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:184)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:1
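
The trace suggests a completion callback firing after shutdown has started and 
reading state that has already been released. A generic defensive sketch of 
that pattern; the class and field names are hypothetical and are not the actual 
KafkaConfigStorage internals:

{code}
public class ShutdownAwareCallback {
    // Set by the shutdown path before any resources are released.
    private volatile boolean stopping = false;
    // Stands in for whatever state the completion callback reads.
    private volatile Object state = new Object();

    public void onCompletion(Throwable error, Object result) {
        Object current = state;
        if (stopping || current == null) {
            // Shutdown already in progress: skip the work rather than risk an NPE.
            return;
        }
        // ... normal completion handling using 'current' ...
    }

    public void stop() {
        stopping = true;   // published before the state is torn down
        state = null;
    }
}
{code}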

[GitHub] kafka pull request: Minor: ConsoleConsumer - Fix number of process...

2015-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/617


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2943) Transient Failure in kafka.producer.SyncProducerTest.testReachableServer

2015-12-03 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2943:


 Summary: Transient Failure in 
kafka.producer.SyncProducerTest.testReachableServer
 Key: KAFKA-2943
 URL: https://issues.apache.org/jira/browse/KAFKA-2943
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


{code}
Stacktrace

java.lang.AssertionError: Unexpected failure sending message to broker. null
at org.junit.Assert.fail(Assert.java:88)
at 
kafka.producer.SyncProducerTest.testReachableServer(SyncProducerTest.scala:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Standard Output

[2015-12-03 07:10:17,494] ERROR [Replica Manager on Broker 0]: Error processing 
append operation on partition [minisrtest,0] (kafka.server.ReplicaManager:103)
kafka.common.NotEnoughReplicasException: Number of insync replicas for 
partition [minisrtest,0] is [1], below required minimum [2]
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:438)
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
at kafka.utils.CoreUtils$.inLock(Cor

[jira] [Commented] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-12-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038061#comment-15038061
 ] 

Guozhang Wang commented on KAFKA-2718:
--

Another example: 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1606/testReport/junit/kafka.api/SaslSslConsumerTest/testListTopics/

> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.1
>
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2933) Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment

2015-12-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038059#comment-15038059
 ] 

Guozhang Wang commented on KAFKA-2933:
--

Another example: 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1606/testReport/junit/kafka.api/PlaintextConsumerTest/testMultiConsumerDefaultAssignment/

> Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment
> -
>
> Key: KAFKA-2933
> URL: https://issues.apache.org/jira/browse/KAFKA-2933
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.1.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>
> {code}
> kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
> java.lang.AssertionError: Did not get valid assignment for partitions 
> [topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, 
> topic1-3, topic1-1, topic2-2] after we changed subscription
> at org.junit.Assert.fail(Assert.java:88)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
> at 
> kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:644)
> at 
> kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:663)
> at 
> kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextConsumerTest.scala:461)
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access
> {code}
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1582/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Manually ported changes in 8c3c9548b636...

2015-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/620


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Add Rolling Upgrade Notes to Security Docs

2015-12-03 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/625

Add Rolling Upgrade Notes to Security Docs

Also added info about the krb5.conf file, as we don't appear to mention it 
in the current docs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka security_docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/625.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #625


commit 5a0058ea528899cd5bb5b2b38e3c82b3fd0c23d8
Author: Ben Stopford 
Date:   2015-12-03T15:14:45Z

Include krb5 parameter in SASL docs

commit 8b4b6f277f8130ce3d1286c89da748b9d2a8a46c
Author: Ben Stopford 
Date:   2015-12-03T15:47:36Z

Added documentation for rolling upgrade

commit 46ffa30bbe16629e583275bbfd25c8e3acf4da8e
Author: Ben Stopford 
Date:   2015-12-03T16:00:44Z

formatting






Re: [DISCUSS] KIP-30 Allow for brokers to have plug-able consensus and meta data storage sub systems

2015-12-03 Thread Joel Koshy
I’m on an extended vacation, but "a native consensus implementation in
Kafka" is one of my favorite topics, so I’m compelled to chime in :) I have
actually been thinking along the same lines as Jay since reading the Raft
paper - a couple of us at LinkedIn have had casual discussions about this
(i.e., a Raft-like implementation embedded in Kafka) - but at this point I’m
not sure it is a practical or necessary effort for the project per se: I have
heard from a few people that operating ZooKeeper is difficult but haven’t
quite understood why that is so. It does need some care in selecting hardware
and configuration, but after that it should just work, right? I really think
the major concerns are caused by incorrect usage in the application (i.e.,
Kafka), which results in weird Kafka cluster behavior and then ZooKeeper
getting the blame. Furthermore, Raft-like implementations sound doable, but I
did have the concern that it would be very difficult in practice to perfect
the implementation, and Flavio is assuring us that it is indeed difficult :)

Although it is unclear to me whether a native consensus implementation in
Kafka is worthwhile, and although I’m willing to take Flavio’s word for it
given his experience with ZK, and although I'm generally a proponent of
McIlroy's views in such decisions, I do think it is a super cool project to
try out anyway, given that the commit log is Kafka’s core abstraction and I
think the network layer and the other layers are really solid. It may just
turn out to be really elegant and be a “nice-to-have” sort of win; i.e., I
think it would be nice not to have to run one more service for Kafka, and
neat to have a native consensus implementation that we understand and know
really well. At this point I’m not sure it will be a very big win though,
since I don’t quite see the issues in running ZooKeeper alongside Kafka, and
it would mean bringing a fair deal of new complexity into Kafka itself.

On the topic of wrapper libraries: from a developer perspective it is easy
to use ZooKeeper incorrectly, and I’m guessing there are more than a few
anti-patterns in our code, especially since we use a wrapper library. So I
agree with Neha’s viewpoint that using wrappers such as zkclient/curator
without being well-versed in the wrapper code itself is another cause of
subtle bugs in our usage. Removing that wrapper is an interesting option,
but it would again be a fairly significant effort - in effect it amounts to
reimplementing our own wrapper library around ZK. Another option is to have
a few committers/contributors be the “experts/owners” of our
zkclient/ZooKeeper usage. E.g., we have contributed a few fixes back to
zkclient after diving into the code while debugging various issues; however,
these may take some time to get merged in and subsequently pulled into
Kafka.

On Tue, Dec 1, 2015 at 5:49 PM, Jay Kreps  wrote:

> Hey Flavio,
>
> Yeah, I think we are largely in agreement on virtually all points.
>
> Where I saw ZK shine was really in in-house infrastructure. LinkedIn had a
> dozen in-house systems that all used it, and it wouldn't have made sense
> for any of those systems to build their own. Likewise when we started Kafka
> there were really only 1-3 developers for a very long time, so doing anything
> more custom would have been out of reach. I guess the characteristic of
> in-house infrastructure is that it has to be cheap to build, and it often
> ends up having lots and lots of other system dependencies which is fine so
> long as they are things you already run.
>
> For an open source product, though, you are kind of optimizing with a
> different objective function. You are trying to make the thing easy to get
> going with and willing to spend more time to get there. That out-of-the-box
> experience of how easy it is to adopt and operationalize is the big issue
> in how successful the system is. I think using external consensus systems
> ends up not being quite as good here because many people won't already have
> the dependency as part of their stack and for them you effectively double
> the operational footprint they have to master. I think this is why this is
> such a loud and persistent complaint (Joe is right, it is the first
> question asked at every talk I give)--they want to adopt one distributed
> thingy (Kafka) which is hard enough to monitor, configure, understand etc,
> but to get that we make them learn a second one too. Even when people
> already have zk in their stack that doesn't really help because it turns
> out that sharing the zk cluster safely is usually not easy for people.
>
> Actually ZK itself is really great in this way--I think a lot of its
> success comes from being a totally simple standalone component. If it
> required some other system to run (dunno what, but say HDFS or MySQL or
> whatever) I think it would be far less successful.
>
> 

[jira] [Updated] (KAFKA-2870) Support configuring operationRetryTimeout of underlying ZkClient through ZkUtils constructor

2015-12-03 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak updated KAFKA-2870:
---
Status: Patch Available  (was: Open)

> Support configuring operationRetryTimeout of underlying ZkClient through 
> ZkUtils constructor
> 
>
> Key: KAFKA-2870
> URL: https://issues.apache.org/jira/browse/KAFKA-2870
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Stevo Slavic
>Assignee: Jakub Nowak
>Priority: Minor
>
> Currently (Kafka 0.9.0.0 RC3) it's not possible to configure the underlying 
> {{ZkClient}}'s {{operationRetryTimeout}} and still use Kafka's 
> {{ZKStringSerializer}} in a {{ZkUtils}} instance.
> Please support configuring {{operationRetryTimeout}} via another 
> {{ZkUtils.apply}} factory method.
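
For illustration, a minimal sketch of the factory-method shape this ticket 
asks for; the object name, parameter name, and defaults below are assumptions 
made for the sketch, not the committed API:

{code}
// Hypothetical sketch only: an extra, optional operation retry timeout on the
// factory method, to be handed to the underlying ZkClient while Kafka's
// ZKStringSerializer stays in place.
object ZkUtilsFactorySketch {
  def create(zkUrl: String,
             sessionTimeout: Int,
             connectionTimeout: Int,
             isZkSecurityEnabled: Boolean,
             operationRetryTimeoutMs: Option[Long] = None): Unit = {
    // A real patch would pass operationRetryTimeoutMs to the ZkClient
    // constructor when it is defined and keep today's behaviour otherwise.
    println(s"operationRetryTimeout: ${operationRetryTimeoutMs.getOrElse(-1L)} ms")
  }
}
{code}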





[jira] [Commented] (KAFKA-2870) Support configuring operationRetryTimeout of underlying ZkClient through ZkUtils constructor

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037752#comment-15037752
 ] 

ASF GitHub Bot commented on KAFKA-2870:
---

GitHub user Mszak opened a pull request:

https://github.com/apache/kafka/pull/624

KAFKA-2870: add optional operationRetryTimeout parameter to apply method in 
ZKUtils.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Mszak/kafka kafka-2870

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/624.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #624


commit 0432a6c307e0acec4424fbd84aa13ad61566e0b3
Author: Jakub Nowak 
Date:   2015-12-03T12:44:54Z

Add optional operationRetryTimeoutInMillis field in ZKUtils apply method.




> Support configuring operationRetryTimeout of underlying ZkClient through 
> ZkUtils constructor
> 
>
> Key: KAFKA-2870
> URL: https://issues.apache.org/jira/browse/KAFKA-2870
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Stevo Slavic
>Assignee: Jakub Nowak
>Priority: Minor
>
> Currently (Kafka 0.9.0.0 RC3) it's not possible to configure the underlying 
> {{ZkClient}}'s {{operationRetryTimeout}} and still use Kafka's 
> {{ZKStringSerializer}} in a {{ZkUtils}} instance.
> Please support configuring {{operationRetryTimeout}} via another 
> {{ZkUtils.apply}} factory method.





[GitHub] kafka pull request: KAFKA-2870: add optional operationRetryTimeout...

2015-12-03 Thread Mszak
GitHub user Mszak opened a pull request:

https://github.com/apache/kafka/pull/624

KAFKA-2870: add optional operationRetryTimeout parameter to apply method in 
ZKUtils.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Mszak/kafka kafka-2870

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/624.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #624


commit 0432a6c307e0acec4424fbd84aa13ad61566e0b3
Author: Jakub Nowak 
Date:   2015-12-03T12:44:54Z

Add optional operationRetryTimeoutInMillis field in ZKUtils apply method.






[jira] [Assigned] (KAFKA-2870) Support configuring operationRetryTimeout of underlying ZkClient through ZkUtils constructor

2015-12-03 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak reassigned KAFKA-2870:
--

Assignee: Jakub Nowak

> Support configuring operationRetryTimeout of underlying ZkClient through 
> ZkUtils constructor
> 
>
> Key: KAFKA-2870
> URL: https://issues.apache.org/jira/browse/KAFKA-2870
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Stevo Slavic
>Assignee: Jakub Nowak
>Priority: Minor
>
> Currently (Kafka 0.9.0.0 RC3) it's not possible to configure the underlying 
> {{ZkClient}}'s {{operationRetryTimeout}} and still use Kafka's 
> {{ZKStringSerializer}} in a {{ZkUtils}} instance.
> Please support configuring {{operationRetryTimeout}} via another 
> {{ZkUtils.apply}} factory method.





[jira] [Commented] (KAFKA-2893) Add Negative Partition Seek Check

2015-12-03 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037651#comment-15037651
 ] 

jin xing commented on KAFKA-2893:
-

I can reproduce this.
When we call consumer.seek(partition, offset), the offset must be bigger than 
the smallest LogSegment's baseOffset (the beginning of the log) and smaller 
than the log size (the end of the log); otherwise we get "Fetch offset null is 
out of range".
Setting the offset to a negative value is obviously invalid, so maybe we can 
reject such a request on the consumer side without sending it to the broker.
Am I right?
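
For illustration, a minimal sketch of the client-side guard being suggested 
here; the helper name is made up, but KafkaConsumer.seek(TopicPartition, long) 
is the standard new-consumer API:

{code}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// Hypothetical helper: fail fast on a negative offset instead of sending the
// fetch to the broker and then logging "Fetch offset null is out of range".
object SeekGuardSketch {
  def seekChecked[K, V](consumer: KafkaConsumer[K, V],
                        partition: TopicPartition,
                        offset: Long): Unit = {
    require(offset >= 0, s"seek offset must be non-negative, got $offset")
    consumer.seek(partition, offset)
  }
}
{code}

Whether such a check belongs in seek() itself or deeper in the Fetcher is 
something the patch would have to decide.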

> Add Negative Partition Seek Check
> -
>
> Key: KAFKA-2893
> URL: https://issues.apache.org/jira/browse/KAFKA-2893
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
>
> When seeking to a negative offset, there isn't a check. When you do give a 
> negative number, you get the following output:
> {{2015-11-25 13:54:16 INFO  Fetcher:567 - Fetch offset null is out of range, 
> resetting offset}}
> Code to replicate:
> KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
> TopicPartition partition = new TopicPartition(topic, 0);
> consumer.assign(Arrays.asList(partition));
> consumer.seek(partition, -1);


