[jira] [Commented] (KAFKA-3494) mbeans overwritten with identical clients on a single jvm

2016-05-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271902#comment-15271902
 ] 

ASF GitHub Bot commented on KAFKA-3494:
---

GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/1323

KAFKA-3494: add metric id to client mbeans

KafkaConsumer and KafkaProducer mbeans currently only have a client-id 
granularity. Client-id represents a logical name for an application.

Given that quotas encourage reuse of client-ids, it's not unexpected to 
have multiple clients share the same client-id within the same jvm. When a 
later client gets instantiated with the same client-id as an earlier client in 
the same jvm, JmxReporter unregisters the original client's mbeans that collide 
with the new client's mbeans.

This commit makes client mbeans have a metric-id granularity to prevent 
mbean collision and the original client mbean unregistration.
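
As a rough way to observe the collision from inside the affected JVM, one can 
query the platform MBeanServer for the kafka.consumer domain while both 
consumers are alive. A minimal sketch using only standard javax.management 
APIs (the class name and printing are illustrative):

{code}
import java.lang.management.ManagementFactory;
import java.util.Set;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListClientMBeans {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // With two consumers sharing the client-id "testclientid", only one set
        // of mbeans shows up here: JmxReporter unregisters the earlier client's
        // colliding mbeans when the later client registers its own.
        Set<ObjectName> names = server.queryNames(new ObjectName("kafka.consumer:*"), null);
        for (ObjectName name : names) {
            System.out.println(name);
        }
    }
}
{code}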

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-3494

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1323.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1323


commit 40a0189e856ba42be396bd2694e5719e8e29631b
Author: Onur Karaman 
Date:   2016-05-04T23:34:28Z

add metric id to client mbeans

KafkaConsumer and KafkaProducer mbeans currently only have a client-id 
granularity. Client-id represents a logical name for an application.

Given that quotas encourage reuse of client-ids, it's not unexpected to 
have multiple clients share the same client-id within the same jvm. When a 
later client gets instantiated with the same client-id as an earlier client in 
the same jvm, JmxReporter unregisters the original client's mbeans that collide 
with the new client's mbeans.

This commit makes client mbeans have a metric-id granularity to prevent 
mbean collision and the original client mbean unregistration.




> mbeans overwritten with identical clients on a single jvm
> -
>
> Key: KAFKA-3494
> URL: https://issues.apache.org/jira/browse/KAFKA-3494
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>
> Quotas today are implemented on a (client-id, broker) granularity. I think 
> one of the motivating factors for using a simple quota id was to allow for 
> flexibility in the granularity of the quota enforcement. For instance, entire 
> services can share the same id to get some form of (service, broker) 
> granularity quotas. From my understanding, client-id was chosen as the quota 
> id because it's a property that already exists on the clients and reusing it 
> had relatively low impact.
> Continuing the above example, let's say a service spins up multiple 
> KafkaConsumers in one jvm sharing the same client-id because they want those 
> consumers to be quota'd as a single entity. Sharing client-ids within a single 
> jvm would cause problems in client metrics since the mbean tags only go as 
> granular as the client-id.
> An easy example is kafka-metrics count. Here's a sample code snippet:
> {code}
> package org.apache.kafka.clients.consumer;
> 
> import java.util.Collections;
> import java.util.Properties;
> 
> import org.apache.kafka.common.TopicPartition;
> 
> public class KafkaConsumerMetrics {
>     public static void main(String[] args) throws InterruptedException {
>         Properties properties = new Properties();
>         properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "test");
>         properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
>         properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
>         properties.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, "testclientid");
>         KafkaConsumer<String, String> kc1 = new KafkaConsumer<>(properties);
>         KafkaConsumer<String, String> kc2 = new KafkaConsumer<>(properties);
>         kc1.assign(Collections.singletonList(new TopicPartition("t1", 0)));
>         while (true) {
>             kc1.poll(1000);
>             System.out.println("kc1 metrics: " + kc1.metrics().size());
>             System.out.println("kc2 metrics: " + kc2.metrics().size());
>             Thread.sleep(1000);
>         }
>     }
> }
> {code}
> jconsole shows one mbean 
> kafka.consumer:type=kafka-metrics-count,client-id=testclientid consistently 
> with value 40.
> but stdout 

[GitHub] kafka pull request: KAFKA-3494: add metric id to client mbeans

2016-05-04 Thread onurkaraman
GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/1323

KAFKA-3494: add metric id to client mbeans

KafkaConsumer and KafkaProducer mbeans currently only have a client-id 
granularity. Client-id represents a logical name for an application.

Given that quotas encourage reuse of client-ids, it's not unexpected to 
have multiple clients share the same client-id within the same jvm. When a 
later client gets instantiated with the same client-id as an earlier client in 
the same jvm, JmxReporter unregisters the original client's mbeans that collide 
with the new client's mbeans.

This commit makes client mbeans have a metric-id granularity to prevent 
mbean collision and the original client mbean unregistration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-3494

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1323.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1323


commit 40a0189e856ba42be396bd2694e5719e8e29631b
Author: Onur Karaman 
Date:   2016-05-04T23:34:28Z

add metric id to client mbeans

KafkaConsumer and KafkaProducer mbeans currently only have a client-id 
granularity. Client-id represents a logical name for an application.

Given that quotas encourage reuse of client-ids, it's not unexpected to 
have multiple clients share the same client-id within the same jvm. When a 
later client gets instantiated with the same client-id as an earlier client in 
the same jvm, JmxReporter unregisters the original client's mbeans that collide 
with the new client's mbeans.

This commit makes client mbeans have a metric-id granularity to prevent 
mbean collision and the original client mbean unregistration.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #590

2016-05-04 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: KAFKA-3659: Handle coordinator disconnects mor...

2016-05-04 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1322

KAFKA-3659: Handle coordinator disconnects more gracefully in client



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3659

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1322.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1322


commit e3d0f3f950fc7005e690510b0f66a19d4ca4c10c
Author: Jason Gustafson 
Date:   2016-05-05T00:33:36Z

KAFKA-3659: Handle coordinator disconnects more gracefully in client




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3656) Avoid stressing system more when already under stress

2016-05-04 Thread Alexey Raga (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271880#comment-15271880
 ] 

Alexey Raga commented on KAFKA-3656:


Also, can it be backported to 0.9.0.1? It causes us lots of pain by dying 
with an OutOfMemoryException.

> Avoid stressing system more when already under stress
> -
>
> Key: KAFKA-3656
> URL: https://issues.apache.org/jira/browse/KAFKA-3656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Alexey Raga
>Assignee: Liquan Pei
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> I am working with Kafka Connect now and I am having error messages like that:
> {code}
> [2016-05-04 03:11:28,226] ERROR Failed to flush 
> WorkerSourceTask{id=geo-connector-0}, timed out while waiting for producer to 
> flush outstanding messages, 151860 left ([FAILED toString()]) 
> (org.apache.kafka.connect.runtime.WorkerSourceTask:237)
> [2016-05-04 03:11:28,227] ERROR Failed to commit offsets for 
> WorkerSourceTask{id=geo-connector-0} 
> (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
> {code}
> I haven't figured out why Connect would pull so many records into 
> memory when it clearly can't produce that fast, and I don't yet know why 
> producing messages is slow.
> But the part of {{151860 left ([FAILED toString()]) }} is interesting and I 
> looked at the code and found this:
> {code}
> if (timeoutMs <= 0) {
>     log.error("Failed to flush {}, timed out while waiting for producer to flush outstanding "
>             + "messages, {} left ({})", this, outstandingMessages.size(), outstandingMessages);
>     finishFailedFlush();
>     return false;
> }
> {code}
> So when the connector is under stress and, assuming {{151860}} messages, 
> under heavy memory pressure, the code chooses to take pretty much {{4 * 
> 151860}} byte arrays and convert them to a Java string.
> This not only eats more memory and adds GC pressure, but is also useless for 
> logging because the actual string, if it didn't fail, would look like:
> {code}
> (topic=lamington--geo-connector, partition=null, key=null, 
> value=[B@62c66f62=ProducerRecord(topic=lamington--geo-connector, 
> partition=null, key=null, value=[B@62c66f62, 
> ProducerRecord(topic=lamington--geo-connector, partition=null, key=null, .
> {code}
> I think it is a bug and a string representation of the outstanding messages 
> should be removed from the log.
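
A minimal sketch of the proposed change, reusing the names from the snippet 
quoted above (log only the count, never the collection itself):

{code}
if (timeoutMs <= 0) {
    // Report how many messages are outstanding, but do not stringify the
    // collection: building that string allocates heavily at exactly the
    // moment the process is already under memory pressure.
    log.error("Failed to flush {}, timed out while waiting for producer to flush outstanding "
            + "messages, {} left", this, outstandingMessages.size());
    finishFailedFlush();
    return false;
}
{code}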



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3656) Avoid stressing system more when already under stress

2016-05-04 Thread Alexey Raga (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271876#comment-15271876
 ] 

Alexey Raga commented on KAFKA-3656:


Thanks, are the binaries published somewhere?

> Avoid stressing system more when already under stress
> -
>
> Key: KAFKA-3656
> URL: https://issues.apache.org/jira/browse/KAFKA-3656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Alexey Raga
>Assignee: Liquan Pei
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> I am working with Kafka Connect now and I am having error messages like that:
> {code}
> [2016-05-04 03:11:28,226] ERROR Failed to flush 
> WorkerSourceTask{id=geo-connector-0}, timed out while waiting for producer to 
> flush outstanding messages, 151860 left ([FAILED toString()]) 
> (org.apache.kafka.connect.runtime.WorkerSourceTask:237)
> [2016-05-04 03:11:28,227] ERROR Failed to commit offsets for 
> WorkerSourceTask{id=geo-connector-0} 
> (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
> {code}
> I haven't figured out why Connect would pull so many records into 
> memory when it clearly can't produce that fast, and I don't yet know why 
> producing messages is slow.
> But the part of {{151860 left ([FAILED toString()]) }} is interesting and I 
> looked at the code and found this:
> {code}
> if (timeoutMs <= 0) {
>     log.error("Failed to flush {}, timed out while waiting for producer to flush outstanding "
>             + "messages, {} left ({})", this, outstandingMessages.size(), outstandingMessages);
>     finishFailedFlush();
>     return false;
> }
> {code}
> So when the connector is under stress and, assuming {{151860}} messages, 
> under heavy memory pressure, the code chooses to take pretty much {{4 * 
> 151860}} byte arrays and convert them to a Java string.
> This not only eats more memory and adds GC pressure, but is also useless for 
> logging because the actual string, if it didn't fail, would look like:
> {code}
> (topic=lamington--geo-connector, partition=null, key=null, 
> value=[B@62c66f62=ProducerRecord(topic=lamington--geo-connector, 
> partition=null, key=null, value=[B@62c66f62, 
> ProducerRecord(topic=lamington--geo-connector, partition=null, key=null, .
> {code}
> I think it is a bug and a string representation of the outstanding messages 
> should be removed from the log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-725) Broker Exception: Attempt to read with a maximum offset less than start offset

2016-05-04 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao reopened KAFKA-725:
---

Reopening this jira since the fix exposes a new issue. When the leader switches 
(say due to leader balancing), the new leader's HW can actually be smaller than 
the previous leader's HW since HW is propagated asynchronously. The new 
leader's log end offset is >= the previous leader's HW, and eventually its 
HW will move to its log end offset. Before that happens, if a consumer fetches 
data using the previous leader's HW, then with the patch the consumer will get an 
OffsetOutOfRangeException and thus has to reset the offset, which is bad. 
Without the patch, the consumer would get an empty response instead.
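
To make the scenario concrete, an illustrative timeline with made-up offsets:

{code}
Old leader A:     LEO = 105, HW = 100   (HW reaches followers asynchronously)
Follower B:       LEO = 103, HW = 98    (has not yet received A's latest HW)
B becomes leader: B's HW (98) < A's last HW (100), while B's LEO (103) >= A's HW (100)
Consumer fetches at offset 100 (A's HW):
  with the patch:    OffsetOutOfRangeException -> consumer resets its offset (bad)
  without the patch: empty response until B's HW advances to its LEO
{code}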

So, it seems that we should revert the changes in this patch.

> Broker Exception: Attempt to read with a maximum offset less than start offset
> --
>
> Key: KAFKA-725
> URL: https://issues.apache.org/jira/browse/KAFKA-725
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>Assignee: Stig Rohde Døssing
> Fix For: 0.10.0.0
>
>
> I have a simple consumer that's reading from a single topic/partition pair. 
> Running it seems to trigger these messages on the broker periodically:
> 2013/01/22 23:04:54.936 ERROR [KafkaApis] [kafka-request-handler-4] [kafka] 
> []  [KafkaApi-466] error when processing request (MyTopic,4,7951732,2097152)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (7951715) less than the start offset (7951732).
> at kafka.log.LogSegment.read(LogSegment.scala:105)
> at kafka.log.Log.read(Log.scala:390)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:372)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:330)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:326)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.map(Map.scala:93)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:326)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:165)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:164)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> at 
> kafka.server.KafkaApis.maybeUnblockDelayedFetchRequests(KafkaApis.scala:164)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:186)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:185)
> at scala.collection.immutable.Map$Map2.foreach(Map.scala:127)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:185)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> at java.lang.Thread.run(Thread.java:619)
> When I shut the consumer down, I don't see the exceptions anymore.
> This is the code that my consumer is running:
>   while(true) {
> // we believe the consumer to be connected, so try and use it for 
> a fetch request
> val request = new FetchRequestBuilder()
>   .addFetch(topic, partition, nextOffset, fetchSize)
>   .maxWait(Int.MaxValue)
>   // TODO for super high-throughput, might be worth waiting for 
> more bytes
>   .minBytes(1)
>   .build
> debug("Fetching messages for stream %s and offset %s." format 
> (streamPartition, nextOffset))
> val messages = connectedConsumer.fetch(request)
> debug("Fetch complete for stream %s and offset %s. Got messages: 
> %s" format (streamPartition, nextOffset, messages))
> if (messages.hasError) {
>   warn("Got error code from broker for %s: %s. Shutting down 
> consumer to trigger a reconnect." format (streamPartition, 
> messages.errorCode(topic, partition)))
>   ErrorMapping.maybeThrowException(messages.errorCode(topic, 
> partition))
> }
> messages.messageSet(topic, 

Build failed in Jenkins: kafka-trunk-jdk7 #1252

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3527: Consumer commitAsync should not expose internal exceptions

[me] MINOR: Fixes to AWS init script for testing

--
[...truncated 2194 lines...]

kafka.admin.AdminRackAwareTest > testRackAwareExpansion PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED

kafka.admin.AclCommandTest > testAclCli PASSED

kafka.admin.AclCommandTest > testProducerConsumerCli PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #589

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3652; Return error response for unsupported version of

--
[...truncated 3305 lines...]

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > 

[GitHub] kafka pull request: MINOR: Add Kafka Streams API / upgrade notes

2016-05-04 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1321

MINOR: Add Kafka Streams API / upgrade notes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KStreamsJavaDoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1321.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1321


commit 7fa28e79f949ecb3c3843634b5ef473ce2c979fa
Author: Guozhang Wang 
Date:   2016-05-05T03:20:37Z

add Kafka Streams API / upgrade notes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-0.10.0-jdk7 #52

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3527: Consumer commitAsync should not expose internal exceptions

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.10.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.10.0^{commit} # timeout=10
Checking out Revision e9d10108b47018578a53d6863084c41baa3bb579 
(refs/remotes/origin/0.10.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e9d10108b47018578a53d6863084c41baa3bb579
 > git rev-list 91130e4242f816a97a0e81a242ac41e5107c # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson6531668185732436216.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 24.884 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson6297926211127538195.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
org.gradle.api.internal.changedetection.state.FileCollectionSnapshotImpl cannot 
be cast to 
org.gradle.api.internal.changedetection.state.OutputFilesCollectionSnapshotter$OutputFilesSnapshot

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 25.088 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: Minor fixes to AWS init script for testing

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1309


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3527) Consumer commitAsync should not expose internal exceptions

2016-05-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271802#comment-15271802
 ] 

ASF GitHub Bot commented on KAFKA-3527:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1300


> Consumer commitAsync should not expose internal exceptions
> --
>
> Key: KAFKA-3527
> URL: https://issues.apache.org/jira/browse/KAFKA-3527
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Liquan Pei
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> Currently we expose some internal exceptions to the user in the consumer's 
> OffsetCommitCallback (e.g. group load in progress, not coordinator for 
> group). We should convert these to instances of CommitFailedException and 
> provide a clear message to the user.
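
For context, these exceptions reach the application through the commit 
callback. A minimal consumer-side sketch (a fragment; assumes a running 
KafkaConsumer named {{consumer}}):

{code}
consumer.commitAsync(new OffsetCommitCallback() {
    @Override
    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
        if (exception != null) {
            // Before the fix, internal retriable errors such as "group load in
            // progress" or "not coordinator for group" could surface here
            // directly; the fix wraps them in a CommitFailedException with a
            // clearer message.
            System.err.println("Async commit failed: " + exception);
        }
    }
});
{code}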



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3527) Consumer commitAsync should not expose internal exceptions

2016-05-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3527.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.0
   0.10.1.0

Issue resolved by pull request 1300
[https://github.com/apache/kafka/pull/1300]

> Consumer commitAsync should not expose internal exceptions
> --
>
> Key: KAFKA-3527
> URL: https://issues.apache.org/jira/browse/KAFKA-3527
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Liquan Pei
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> Currently we expose some internal exceptions to the user in the consumer's 
> OffsetCommitCallback (e.g. group load in progress, not coordinator for 
> group). We should convert these to instances of CommitFailedException and 
> provide a clear message to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3527: Consumer commitAsync should not ex...

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1300


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3565) Producer's throughput lower with compressed data after KIP-31/32

2016-05-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271795#comment-15271795
 ] 

Jiangjie Qin commented on KAFKA-3565:
-

[~junrao] I noticed that the consumer tests finished very quickly because the 
consumption is much faster than producing. So if a producer produces for 30 
seconds, it is possible that the consumer consumes everything in 3 seconds. 
This might cause the test result to be inaccurate. I will re-run the test with 
more data and let the consumers consume for a longer time.

> Producer's throughput lower with compressed data after KIP-31/32
> 
>
> Key: KAFKA-3565
> URL: https://issues.apache.org/jira/browse/KAFKA-3565
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Relative offsets were introduced by KIP-31 so that the broker does not have 
> to recompress data (this was previously required after offsets were 
> assigned). The implicit assumption is that reducing CPU usage required by 
> recompression would mean that producer throughput for compressed data would 
> increase.
> However, this doesn't seem to be the case:
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--012.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   59.030 seconds
> {"records_per_sec": 519418.343653, "mb_per_sec": 49.54}
> {code}
> Full results: https://gist.github.com/ijuma/0afada4ff51ad6a5ac2125714d748292
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--013.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   1 minute 0.243 seconds
> {"records_per_sec": 427308.818848, "mb_per_sec": 40.75}
> {code}
> Full results: https://gist.github.com/ijuma/e49430f0548c4de5691ad47696f5c87d
> The difference for the uncompressed case is smaller (and within what one 
> would expect given the additional size overhead caused by the timestamp 
> field):
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--010.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 4.176 seconds
> {"records_per_sec": 321018.17747, "mb_per_sec": 30.61}
> {code}
> Full results: https://gist.github.com/ijuma/5fec369d686751a2d84debae8f324d4f
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--014.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 5.079 seconds
> {"records_per_sec": 291777.608696, "mb_per_sec": 27.83}
> {code}
> Full results: https://gist.github.com/ijuma/1d35bd831ff9931448b0294bd9b787ed
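
For reference, the compressed-case numbers quoted above amount to a throughput 
drop of roughly 18% (519,418 -> 427,308 records/sec), versus roughly 9% for the 
uncompressed case (321,018 -> 291,778 records/sec).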



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #51

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3652; Return error response for unsupported version of

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.10.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.10.0^{commit} # timeout=10
Checking out Revision 91130e4242f816a97a0e81a242ac41e5107c 
(refs/remotes/origin/0.10.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 91130e4242f816a97a0e81a242ac41e5107c
 > git rev-list 21351b5755a8ab90b79c6090d97389325a25fc4a # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson7034814062045288728.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 23.948 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-0.10.0-jdk7] $ /bin/bash -xe /tmp/hudson3034822764531880025.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:streams:examples:clean
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
org.gradle.api.internal.changedetection.state.FileCollectionSnapshotImpl cannot 
be cast to 
org.gradle.api.internal.changedetection.state.OutputFilesCollectionSnapshotter$OutputFilesSnapshot

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 24.155 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-0.10.0-jdk7 #50

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3639; Configure default serdes upon construction

--
[...truncated 8717 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED
:streams:examples:checkstyleMain
:streams:examples:compileTestJava UP-TO-DATE
:streams:examples:processTestResources UP-TO-DATE
:streams:examples:testClasses UP-TO-DATE
:streams:examples:checkstyleTest UP-TO-DATE
:streams:examples:test UP-TO-DATE
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 0 mins 51.038 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava UP-TO-DATE
:kafka-0.10.0-jdk7:clients:processResources UP-TO-DATE
:kafka-0.10.0-jdk7:clients:classes UP-TO-DATE
:kafka-0.10.0-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-0.10.0-jdk7:clients:createVersionFile
:kafka-0.10.0-jdk7:clients:jar UP-TO-DATE
:kafka-0.10.0-jdk7:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of output files for task 'javadoc' during up-to-date 
check.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of output files 
for task 'javadoc' during up-to-date check.
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.createSnapshot(AbstractFileSnapshotTaskStateChanges.java:49)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.saveCurrent(OutputFilesTaskStateChanges.java:71)
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.snapshotAfterTask(AbstractFileSnapshotTaskStateChanges.java:77)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.snapshotAfterTask(OutputFilesTaskStateChanges.java:26)
at 

[jira] [Resolved] (KAFKA-3652) Return error response for unsupported version of ApiVersionsRequest

2016-05-04 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-3652.

Resolution: Fixed

Committed the PR to 0.10.0.

> Return error response for unsupported version of ApiVersionsRequest
> ---
>
> Key: KAFKA-3652
> URL: https://issues.apache.org/jira/browse/KAFKA-3652
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Discussion is in the PR https://github.com/apache/kafka/pull/1286. 
> Don't fail authentication (for SASL) or break connections (normal operation) 
> when an unsupported version of ApiVersionsRequest is received. Instead, return 
> an error response so that the client can retry with an earlier version of the request.
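
An informal sketch of the intended exchange (the arrows and the error name are 
illustrative, not actual wire format):

{code}
Client                                  Broker (supports ApiVersionsRequest v0 only)
  |--- ApiVersionsRequest (v1) --------->|
  |<-- ApiVersionsResponse with error ---|  e.g. UNSUPPORTED_VERSION; the connection
  |                                      |  stays open and SASL auth is not failed
  |--- ApiVersionsRequest (v0) --------->|
  |<-- ApiVersionsResponse (versions) ---|
{code}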



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3652) Return error response for unsupported version of ApiVersionsRequest

2016-05-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271751#comment-15271751
 ] 

ASF GitHub Bot commented on KAFKA-3652:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1310


> Return error response for unsupported version of ApiVersionsRequest
> ---
>
> Key: KAFKA-3652
> URL: https://issues.apache.org/jira/browse/KAFKA-3652
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Discussion is in the PR https://github.com/apache/kafka/pull/1286. 
> Don't fail authentication (for SASL) or break connections (normal operation) 
> when an unsupported version of ApiVersionsRequest is received. Instead, return 
> an error response so that the client can retry with an earlier version of the request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3652: Return error response for unsuppor...

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1310


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-3101) Optimize Aggregation Outputs

2016-05-04 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-3101:
--

Assignee: Bill Bejeck

> Optimize Aggregation Outputs
> 
>
> Key: KAFKA-3101
> URL: https://issues.apache.org/jira/browse/KAFKA-3101
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Today we emit one output record for each incoming message for Table / 
> Windowed Stream Aggregations. For example, say we have a sequence of 
> aggregate outputs computed from the input stream (assuming there is no agg 
> value for this key before):
> V1, V2, V3, V4, V5
> Then the aggregator will output the following sequence of Change<newValue, oldValue>:
> <V1, null>, <V2, V1>, <V3, V2>, <V4, V3>, <V5, V4>
> which could cost a lot of CPU overhead computing the intermediate results. 
> Instead, if we can let the underlying state store "remember" the last 
> emitted old value, we can reduce the number of emits based on some configs. 
> More specifically, we can add one more field in the KV store engine storing 
> the last emitted old value, which only gets updated when we emit to the 
> downstream processor. For example:
> At Beginning: 
> Store: key => empty (no agg values yet)
> V1 computed: 
> Update Both in Store: key => (V1, V1), Emit <V1, null>
> V2 computed: 
> Update NewValue in Store: key => (V2, V1), No Emit
> V3 computed: 
> Update NewValue in Store: key => (V3, V1), No Emit
> V4 computed: 
> Update Both in Store: key => (V4, V4), Emit <V4, V1>
> V5 computed: 
> Update NewValue in Store: key => (V5, V4), No Emit
> One more thing to consider is that we need a "closing" time control on the 
> not-yet-emitted keys; when some time has elapsed (or the window is to be 
> closed), we need to check for any key whether their current materialized pairs 
> have not been emitted (for example <V5, V4> in the above example). 
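
A rough sketch of the bookkeeping described above; all type and method names 
here are hypothetical, not the eventual implementation:

{code}
// Hypothetical: the store keeps (newValue, lastEmittedOldValue) per key.
ValueAndLastEmitted<V> entry = store.get(key);
V oldAgg = (entry == null) ? initializer.apply() : entry.newValue;
V newAgg = aggregator.apply(key, value, oldAgg);
if (entry == null || shouldEmit(newAgg, entry.lastEmitted)) {   // emit policy driven by configs
    // Emit Change<newValue, lastEmittedOldValue> and update both fields.
    context.forward(key, new Change<>(newAgg, entry == null ? null : entry.lastEmitted));
    store.put(key, new ValueAndLastEmitted<>(newAgg, newAgg));
} else {
    // Update only the new value; remember what was last emitted.
    store.put(key, new ValueAndLastEmitted<>(newAgg, entry.lastEmitted));
}
{code}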



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3101) Optimize Aggregation Outputs

2016-05-04 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3101 started by Bill Bejeck.
--
> Optimize Aggregation Outputs
> 
>
> Key: KAFKA-3101
> URL: https://issues.apache.org/jira/browse/KAFKA-3101
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Today we emit one output record for each incoming message for Table / 
> Windowed Stream Aggregations. For example, say we have a sequence of 
> aggregate outputs computed from the input stream (assuming there is no agg 
> value for this key before):
> V1, V2, V3, V4, V5
> Then the aggregator will output the following sequence of Change<newValue, oldValue>:
> <V1, null>, <V2, V1>, <V3, V2>, <V4, V3>, <V5, V4>
> which could cost a lot of CPU overhead computing the intermediate results. 
> Instead, if we can let the underlying state store "remember" the last 
> emitted old value, we can reduce the number of emits based on some configs. 
> More specifically, we can add one more field in the KV store engine storing 
> the last emitted old value, which only gets updated when we emit to the 
> downstream processor. For example:
> At Beginning: 
> Store: key => empty (no agg values yet)
> V1 computed: 
> Update Both in Store: key => (V1, V1), Emit <V1, null>
> V2 computed: 
> Update NewValue in Store: key => (V2, V1), No Emit
> V3 computed: 
> Update NewValue in Store: key => (V3, V1), No Emit
> V4 computed: 
> Update Both in Store: key => (V4, V4), Emit <V4, V1>
> V5 computed: 
> Update NewValue in Store: key => (V5, V4), No Emit
> One more thing to consider is that we need a "closing" time control on the 
> not-yet-emitted keys; when some time has elapsed (or the window is to be 
> closed), we need to check for any key whether their current materialized pairs 
> have not been emitted (for example <V5, V4> in the above example). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #1250

2016-05-04 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #588

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3639; Configure default serdes upon construction

--
[...truncated 4363 lines...]

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED

kafka.admin.AclCommandTest > testAclCli PASSED

kafka.admin.AclCommandTest > testProducerConsumerCli PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED


[jira] [Updated] (KAFKA-3160) Kafka LZ4 framing code miscalculates header checksum

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3160:
---
Assignee: Dana Powers  (was: Magnus Edenhill)

> Kafka LZ4 framing code miscalculates header checksum
> 
>
> Key: KAFKA-3160
> URL: https://issues.apache.org/jira/browse/KAFKA-3160
> Project: Kafka
>  Issue Type: Bug
>  Components: compression
>Affects Versions: 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.8.2.2, 0.9.0.1
>Reporter: Dana Powers
>Assignee: Dana Powers
>Priority: Critical
>  Labels: compatibility, compression, lz4
> Fix For: 0.10.0.0
>
>
> KAFKA-1493 partially implements the LZ4 framing specification, but it 
> incorrectly calculates the header checksum. This causes 
> KafkaLZ4BlockInputStream to raise an error 
> [IOException(DESCRIPTOR_HASH_MISMATCH)] if a client sends *correctly* framed 
> LZ4 data. It also causes KafkaLZ4BlockOutputStream to generate incorrectly 
> framed LZ4 data, which means clients decoding LZ4 messages from kafka will 
> always receive incorrectly framed data.
> Specifically, the current implementation includes the 4-byte MagicNumber in 
> the checksum, which is incorrect.
> http://cyan4973.github.io/lz4/lz4_Frame_format.html
> Third-party clients that attempt to use off-the-shelf lz4 framing find that 
> brokers reject messages as having a corrupt checksum. So currently non-java 
> clients must 'fixup' lz4 packets to deal with the broken checksum.
> Magnus first identified this issue in librdkafka; kafka-python has the same 
> problem.
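> For reference, the LZ4 frame spec derives the header checksum from the frame 
> descriptor only, i.e. the bytes after the 4-byte magic number: HC is the 
> second byte of XXH32(descriptor, seed=0). A minimal sketch using the lz4-java 
> xxhash bindings (the helper and frame bytes are illustrative, not the patched 
> Kafka code):
> {code}
> import net.jpountz.xxhash.XXHashFactory;
> 
> public class Lz4FrameHeaderChecksum {
>     // HC = second byte of XXH32(frame descriptor, seed=0); the descriptor
>     // starts AFTER the 4-byte magic number.
>     static int headerChecksum(byte[] frame, int descriptorOffset, int descriptorLength) {
>         int hash = XXHashFactory.fastestInstance().hash32()
>                 .hash(frame, descriptorOffset, descriptorLength, 0);
>         return (hash >> 8) & 0xFF;
>     }
> 
>     public static void main(String[] args) {
>         // Minimal frame: magic (4 bytes, little-endian 0x184D2204) + FLG + BD + HC slot.
>         byte[] frame = {0x04, 0x22, 0x4D, 0x18, 0x60, 0x40, 0x00};
>         // Correct: hash FLG+BD only (offset 4, length 2).
>         // The bug hashed from offset 0, i.e. included the magic number.
>         int hc = headerChecksum(frame, 4, 2);
>         frame[6] = (byte) hc;
>         System.out.printf("HC = 0x%02X%n", hc);
>     }
> }
> {code}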



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #49

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA 3656: Remove logging outstanding messages when producer flush

--
[...truncated 8289 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED
:streams:examples:checkstyleMain
:streams:examples:compileTestJava UP-TO-DATE
:streams:examples:processTestResources UP-TO-DATE
:streams:examples:testClasses UP-TO-DATE
:streams:examples:checkstyleTest UP-TO-DATE
:streams:examples:test UP-TO-DATE
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 0 mins 45.523 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava UP-TO-DATE
:kafka-0.10.0-jdk7:clients:processResources UP-TO-DATE
:kafka-0.10.0-jdk7:clients:classes UP-TO-DATE
:kafka-0.10.0-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-0.10.0-jdk7:clients:createVersionFile
:kafka-0.10.0-jdk7:clients:jar UP-TO-DATE
:kafka-0.10.0-jdk7:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of output files for task 'javadoc' during up-to-date 
check.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of output files 
for task 'javadoc' during up-to-date check.
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.createSnapshot(AbstractFileSnapshotTaskStateChanges.java:49)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.saveCurrent(OutputFilesTaskStateChanges.java:71)
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.snapshotAfterTask(AbstractFileSnapshotTaskStateChanges.java:77)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.snapshotAfterTask(OutputFilesTaskStateChanges.java:26)
at 

Re: [VOTE] KIP-57: Interoperable LZ4 Framing

2016-05-04 Thread Jun Rao
Thanks for the response. +1 on the KIP.

Jun

On Thu, Apr 28, 2016 at 9:01 AM, Dana Powers  wrote:

> Sure thing. Yes, the substantive change is fixing the HC checksum.
>
> But to further improve interoperability, the kafka LZ4 class would no
> longer reject messages that have these optional header flags set. The
> flags might get set if the client/user chooses to use a non-java lz4
> compression library that includes them. In practice, naive support for
> the flags just means reading a few extra bytes in the header and/or
> footer of the payload. The KIP does not intend to use or validate this
> extra data.
>
> ContentSize is described as: "This field has no impact on decoding, it
> just informs the decoder how much data the frame holds (for example,
> to display it during decoding process, or for verification purpose).
> It can be safely skipped by a conformant decoder." We skip it.
>
> ContentChecksum is "Content Checksum validates the result, that all
> blocks were fully transmitted in the correct order and without error,
> and also that the encoding/decoding process itself generated no
> distortion." We skip it.
>
> -Dana
>
>
> On Thu, Apr 28, 2016 at 7:43 AM, Jun Rao  wrote:
> > Hi, Dana,
> >
> > Could you explain the following from the KIP a bit more? The KIP is
> > intended to just fix the HC checksum, but the following seems to suggest
> > there are other format changes?
> >
> > KafkaLZ4* code:
> >
> >- add naive support for optional header flags (ContentSize,
> >ContentChecksum) to enable interoperability with off-the-shelf lz4
> libraries
> >- the only flag left unsupported is dependent-block compression, which
> >our implementation does not currently support.
> >
> >
> > Thanks,
> >
> > Jun
> >
> > On Mon, Apr 25, 2016 at 2:26 PM, Dana Powers 
> wrote:
> >
> >> Hi all,
> >>
> >> Initiating a vote thread because the KIP-57 proposal is specific to
> >> the 0.10 release.
> >>
> >> KIP-57 can be accessed here:
> >> <
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-57+-+Interoperable+LZ4+Framing
> >> >.
> >>
> >> The related JIRA is https://issues.apache.org/jira/browse/KAFKA-3160
> >> and working github PR at https://github.com/apache/kafka/pull/1212
> >>
> >> The vote will run for 72 hours.
> >>
> >> +1 (non-binding)
> >>
>
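For illustration, the "naive support" described above amounts to reading the
flag bits and skipping the optional bytes instead of rejecting the frame. A
sketch against the LZ4 frame spec (the reader class is hypothetical, not the
KafkaLZ4 code):

{code}
import java.io.DataInputStream;
import java.io.IOException;

public class Lz4DescriptorReader {
    // Reads the frame descriptor that follows the magic number, skipping the
    // optional fields rather than rejecting frames that set them.
    static void readDescriptor(DataInputStream in) throws IOException {
        int flg = in.readUnsignedByte();
        int bd = in.readUnsignedByte();                 // max block size; unused here
        boolean hasContentSize = (flg & 0x08) != 0;     // FLG bit 3
        boolean hasContentChecksum = (flg & 0x04) != 0; // FLG bit 2
        if (hasContentSize)
            in.skipBytes(8);   // informational only; the KIP does not validate it
        in.readUnsignedByte(); // HC (header checksum) byte
        // ... read blocks; if hasContentChecksum, skip 4 trailing bytes after EndMark
    }
}
{code}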


[jira] [Updated] (KAFKA-3659) Consumer does not handle coordinator connection blackout period gracefully

2016-05-04 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3659:
---
Summary: Consumer does not handle coordinator connection blackout period 
gracefully  (was: Consumer handle coordinator connection blackout period)

> Consumer does not handle coordinator connection blackout period gracefully
> --
>
> Key: KAFKA-3659
> URL: https://issues.apache.org/jira/browse/KAFKA-3659
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> Currently when the connection to the coordinator is closed, the consumer will 
> immediately try to rediscover the coordinator and reconnect to it. This is 
> fine as it is, but the NetworkClient enforces a blackout period before it 
> will allow the reconnect to be attempted. This causes the following cycle 
> which continues in a fairly tight loop until the blackout period has 
> completed:
> 1. Notice connection failure (i.e. DISCONNECTED state in ConnectionStates)
> 2. Send GroupCoordinator request to rediscover coordinator.
> 3. Attempt to connect to coordinator.
> 4. Go back to 1.
> To fix this, we should avoid rediscovery while the connection is blacked out.
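> A sketch of the intended shape of the fix, with hypothetical stand-ins for 
> the consumer internals:
> {code}
> public abstract class CoordinatorLookup {
>     abstract boolean coordinatorUnknown();
>     abstract long connectionDelayMs();           // remaining reconnect blackout, 0 when allowed
>     abstract void sendGroupCoordinatorRequest(); // step 2 of the cycle above
> 
>     void ensureCoordinatorReady() throws InterruptedException {
>         while (coordinatorUnknown()) {
>             long blackoutMs = connectionDelayMs();
>             if (blackoutMs > 0) {
>                 Thread.sleep(blackoutMs); // wait out the blackout instead of spinning through steps 1-4
>                 continue;
>             }
>             sendGroupCoordinatorRequest();
>         }
>     }
> }
> {code}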



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3659) Consumer handle coordinator connection blackout period

2016-05-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-3659:
--

 Summary: Consumer handle coordinator connection blackout period
 Key: KAFKA-3659
 URL: https://issues.apache.org/jira/browse/KAFKA-3659
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.9.0.1, 0.9.0.0
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Currently when the connection to the coordinator is closed, the consumer will 
immediately try to rediscover the coordinator and reconnect to it. This is fine 
as it is, but the NetworkClient enforces a blackout period before it will allow 
the reconnect to be attempted. This causes the following cycle which continues 
in a fairly tight loop until the blackout period has completed:

1. Notice connection failure (i.e. DISCONNECTED state in ConnectionStates)
2. Send GroupCoordinator request to rediscover coordinator.
3. Attempt to connect to coordinator.
4. Go back to 1.

To fix this, we should avoid rediscovery while the connection is blacked out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3639) Configure default serdes passed via StreamsConfig

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3639.

Resolution: Fixed

Issue resolved by pull request 1311
[https://github.com/apache/kafka/pull/1311]

> Configure default serdes passed via StreamsConfig
> -
>
> Key: KAFKA-3639
> URL: https://issues.apache.org/jira/browse/KAFKA-3639
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: api
> Fix For: 0.10.0.0
>
>
> For default serde classes passed via configs, their {{configure()}} functions 
> are not triggered before use. This makes the default serdes unusable, for 
> example an AvroSerializer where users may need to pass in a schema registry 
> client. We need to provide an interface where users can pass in the 
> key-value map configs for the default serde classes.
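> A minimal sketch of the fix, assuming the 0.10.0-era 
> StreamsConfig.KEY_SERDE_CLASS_CONFIG key; note that 
> AbstractConfig.getConfiguredInstance does not call configure() here, because 
> Serde is not Configurable, so the explicit call is the point:
> {code}
> import org.apache.kafka.common.serialization.Serde;
> import org.apache.kafka.streams.StreamsConfig;
> 
> public class DefaultSerdeConfiguration {
>     // Instantiate the default key serde from config, then pass the config map
>     // through so e.g. an Avro serde can pick up its schema registry client.
>     static Serde<?> defaultKeySerde(StreamsConfig config) {
>         Serde<?> serde = config.getConfiguredInstance(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serde.class);
>         serde.configure(config.originals(), true); // true = configuring the key serde
>         return serde;
>     }
> }
> {code}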



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3639) Configure default serdes passed via StreamsConfig

2016-05-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15271623#comment-15271623
 ] 

ASF GitHub Bot commented on KAFKA-3639:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1311


> Configure default serdes passed via StreamsConfig
> -
>
> Key: KAFKA-3639
> URL: https://issues.apache.org/jira/browse/KAFKA-3639
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: api
> Fix For: 0.10.0.0
>
>
> For default serde classes passed via configs, their {{configure()}} functions 
> are not triggered before use. This makes the default serdes unusable, for 
> example an AvroSerializer where users may need to pass in a schema registry 
> client. We need to provide an interface where users can pass in the 
> key-value map configs for the default serde classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3639: Configure default serdes upon cons...

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1311


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #587

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA 3656: Remove logging outstanding messages when producer flush

--
[...truncated 114 lines...]
1 warning

:kafka-trunk-jdk8:clients:processTestResources
:kafka-trunk-jdk8:clients:testClasses
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:jar_core_2_11
Building project 'core' with Scala version 2.11.8
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:401:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:56:
 no valid targets for annotation on variable _file - it is discarded unused. 
You may specify targets with meta-annotations, e.g. @(volatile @param)
class OffsetIndex(@volatile private[this] var _file: File, val baseOffset: 
Long, val maxIndexSize: Int = -1) extends Logging {
   ^
:301:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:246:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
Console.readLine().equalsIgnoreCase("y")
^
:377:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
if (!Console.readLine().equalsIgnoreCase("y")) {
 ^
:93:
 class ProducerConfig in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.ProducerConfig instead.
val producerConfig = new ProducerConfig(props)
 ^
:94:
 method fetchTopicMetadata in object ClientUtils is deprecated: This method has 
been deprecated and will be removed in a future release.
fetchTopicMetadata(topics, brokers, producerConfig, correlationId)
^
:396:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 

Build failed in Jenkins: kafka-trunk-jdk8 #586

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2236; Offset request reply racing with segment rolling

--
[...truncated 8588 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED
:streams:examples:checkstyleMain
:streams:examples:compileTestJava UP-TO-DATE
:streams:examples:processTestResources UP-TO-DATE
:streams:examples:testClasses UP-TO-DATE
:streams:examples:checkstyleTest UP-TO-DATE
:streams:examples:test UP-TO-DATE
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 0 mins 1.825 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of output files for task 'javadoc' during up-to-date 
check.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of output files 
for task 'javadoc' during up-to-date check.
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.createSnapshot(AbstractFileSnapshotTaskStateChanges.java:49)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.saveCurrent(OutputFilesTaskStateChanges.java:71)
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.snapshotAfterTask(AbstractFileSnapshotTaskStateChanges.java:77)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.snapshotAfterTask(OutputFilesTaskStateChanges.java:26)
at 

[jira] [Updated] (KAFKA-3656) Avoid stressing system more when already under stress

2016-05-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3656:
-
   Resolution: Fixed
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 0.10.0.0
   0.10.1.0
   Status: Resolved  (was: Patch Available)

> Avoid stressing system more when already under stress
> -
>
> Key: KAFKA-3656
> URL: https://issues.apache.org/jira/browse/KAFKA-3656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Alexey Raga
>Assignee: Liquan Pei
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> I am working with Kafka Connect now and I am having error messages like this:
> {code}
> [2016-05-04 03:11:28,226] ERROR Failed to flush 
> WorkerSourceTask{id=geo-connector-0}, timed out while waiting for producer to 
> flush outstanding messages, 151860 left ([FAILED toString()]) 
> (org.apache.kafka.connect.runtime.WorkerSourceTask:237)
> [2016-05-04 03:11:28,227] ERROR Failed to commit offsets for 
> WorkerSourceTask{id=geo-connector-0} 
> (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
> {code}
> I didn't figure out the reason why Connect would pull so many records into 
> memory when it clearly can't produce that fast and I don't yet know why 
> producing messages is slow.
> But the part of {{151860 left ([FAILED toString()]) }} is interesting and I 
> looked at the code and found this:
> {code}
> if (timeoutMs <= 0) {
>     log.error("Failed to flush {}, timed out while waiting for producer to flush outstanding "
>             + "messages, {} left ({})", this, outstandingMessages.size(), outstandingMessages);
>     finishFailedFlush();
>     return false;
> }
> {code}
> So when the connector is under stress and, assuming {{151860}} messages, 
> under heavy memory pressure, the code chooses to take pretty much {{4 * 
> 151860}} byte arrays and convert them to a java string.
> This not only eats more memory and adds to GC, but is also useless for 
> logging, because the actual string, if it didn't fail, would look like:
> {code}
> (topic=lamington--geo-connector, partition=null, key=null, 
> value=[B@62c66f62=ProducerRecord(topic=lamington--geo-connector, 
> partition=null, key=null, value=[B@62c66f62, 
> ProducerRecord(topic=lamington--geo-connector, partition=null, key=null, .
> {code}
> I think it is a bug and a string representation of the outstanding messages 
> should be removed from the log.
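> The suggested fix would reduce the line to logging just the count, e.g. 
> something like:
> {code}
> if (timeoutMs <= 0) {
>     // Log only the count; rendering the whole collection allocates a huge
>     // string under memory pressure and the output is unreadable anyway.
>     log.error("Failed to flush {}, timed out while waiting for producer to flush outstanding "
>             + "messages, {} left", this, outstandingMessages.size());
>     finishFailedFlush();
>     return false;
> }
> {code}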



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: added description for lag in consumer group co...

2016-05-04 Thread coughman
GitHub user coughman opened a pull request:

https://github.com/apache/kafka/pull/1320

added description for lag in consumer group command



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/coughman/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1320.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1320


commit 61d16565ae19f2f9a6190c38232d6aadad3b2bf3
Author: Kaufman Ng 
Date:   2016-05-04T23:06:45Z

added description for lag in consumer group command




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA 3656: Remove logging outstanding message...

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1319


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3658) RocksDBWindowStore should guarantee a single window locates completely in one segment

2016-05-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3658:
-
Fix Version/s: (was: 0.10.0.0)
   0.10.0.1

> RocksDBWindowStore should guarantee a single window locates completely in one 
> segment
> -
>
> Key: KAFKA-3658
> URL: https://issues.apache.org/jira/browse/KAFKA-3658
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.0.1
>
>
> As [~h...@pinterest.com] found out, the current implementation of 
> {{RocksDBWindowStore}} does not guarantee that a single window locates completely 
> in one segment, and hence when we expire a segment, that would result in 
> partial window expiration (i.e. some records of the window are dropped, while 
> some others are still available for queries). We need to fix this issue by 
> setting the segment size with the window size taken into account.
> Another minor issue is that retention size should be validated correctly to 
> be no less than the window size.
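> A sketch of the sizing rule, with illustrative names:
> {code}
> public class SegmentSizing {
>     // Pick a segment interval no smaller than the window size, so a single
>     // window always lands entirely within one segment.
>     static long segmentInterval(long retentionPeriod, int numSegments, long windowSize) {
>         if (retentionPeriod < windowSize)
>             throw new IllegalArgumentException("retention must be no less than the window size");
>         return Math.max(retentionPeriod / (numSegments - 1), windowSize);
>     }
> }
> {code}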



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3658) RocksDBWindowStore should guarantee a single window locates completely in one segment

2016-05-04 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3658:


 Summary: RocksDBWindowStore should guarantee a single window 
locates completely in one segment
 Key: KAFKA-3658
 URL: https://issues.apache.org/jira/browse/KAFKA-3658
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Fix For: 0.10.0.0


As [~h...@pinterest.com] found out, the current implementation of 
{{RocksDBWindowStore}} does not guarantee that a single window locates completely in 
one segment, and hence when we expire a segment, that would result in partial 
window expiration (i.e. some records of the window are dropped, while some 
others are still available for queries). We need to fix this issue by setting 
the segment size with the window size taken into account.

Another minor issue is that retention size should be validated correctly to be 
no less than the window size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1249

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2236; Offset request reply racing with segment rolling

--
[...truncated 8610 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED
:streams:examples:checkstyleMain
:streams:examples:compileTestJava UP-TO-DATE
:streams:examples:processTestResources UP-TO-DATE
:streams:examples:testClasses UP-TO-DATE
:streams:examples:checkstyleTest UP-TO-DATE
:streams:examples:test UP-TO-DATE
:testAll

BUILD SUCCESSFUL

Total time: 58 mins 59.823 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of output files for task 'javadoc' during up-to-date 
check.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of output files 
for task 'javadoc' during up-to-date check.
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.createSnapshot(AbstractFileSnapshotTaskStateChanges.java:49)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.saveCurrent(OutputFilesTaskStateChanges.java:71)
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.snapshotAfterTask(AbstractFileSnapshotTaskStateChanges.java:77)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.snapshotAfterTask(OutputFilesTaskStateChanges.java:26)
at 

Build failed in Jenkins: kafka-0.10.0-jdk7 #48

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2236; Offset request reply racing with segment rolling

--
[...truncated 39 lines...]
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:streams:examples:clean
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-0.10.0-jdk7:clients:processResources UP-TO-DATE
:kafka-0.10.0-jdk7:clients:classes
:kafka-0.10.0-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-0.10.0-jdk7:clients:createVersionFile
:kafka-0.10.0-jdk7:clients:jar
:kafka-0.10.0-jdk7:core:compileJava UP-TO-DATE
:kafka-0.10.0-jdk7:core:compileScala
:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:401:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:301:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:93:
 class ProducerConfig in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.ProducerConfig instead.
val producerConfig = new ProducerConfig(props)
 ^
:94:
 method fetchTopicMetadata in object ClientUtils is deprecated: This method has 
been deprecated and will be removed in a future release.
fetchTopicMetadata(topics, brokers, producerConfig, correlationId)
^
:396:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:191:
 object ProducerRequestStatsRegistry in package producer is deprecated: This 
object has been deprecated and will be removed in a future release.
ProducerRequestStatsRegistry.removeProducerRequestStats(clientId)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
:301:
 value timestamp in class PartitionData is 

[jira] [Updated] (KAFKA-3627) New consumer doesn't run delayed tasks while under load

2016-05-04 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3627:
---
Priority: Blocker  (was: Critical)

> New consumer doesn't run delayed tasks while under load
> ---
>
> Key: KAFKA-3627
> URL: https://issues.apache.org/jira/browse/KAFKA-3627
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Rob Underwood
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
> Attachments: DelayedTaskBugConsumer.java, kafka-3627-output.log
>
>
> If the new consumer receives a steady flow of fetch responses it will not run 
> delayed tasks, which means it will not heartbeat or perform automatic offset 
> commits.
> The main cause is the code that attempts to pipeline fetch responses and keep 
> the consumer fed.  Specifically, in KafkaConsumer::pollOnce() there is a 
> check that skips calling client.poll() if there are fetched records ready 
> (line 903 in the 0.9.0 branch as of this writing).  Then in 
> KafkaConsumer::poll(), if records are returned it will initiate another fetch 
> and perform a quick poll, which will send/receive fetch requests/responses 
> but will not run delayed tasks.
> If the timing works out, and the consumer is consistently receiving fetched 
> records, it won't run delayed tasks until it doesn't receive a fetch response 
> during its quick poll.  That leads to a rebalance since the consumer isn't 
> heartbeating, and typically means all the consumed records will be 
> re-delivered since the automatic offset commit wasn't able to run either.
> h5. Steps to reproduce
> # Start up a cluster with *at least 2 brokers*.  This seems to be required to 
> reproduce the issue, I'm guessing because the fetch responses all arrive 
> together when using a single broker.
> # Create a topic with a good number of partitions
> #* bq. bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic 
> delayed-task-bug --partitions 10 --replication-factor 1
> # Generate some test data so the consumer has plenty to consume.  In this 
> case I'm just using uuids
> #* bq. for ((i=0;i<100;++i)); do cat /proc/sys/kernel/random/uuid >> 
> /tmp/test-messages; done
> #* bq. bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 
> delayed-task-bug < /tmp/test-messages
> # Start up a consumer with a small max fetch size to ensure it only pulls a 
> few records at a time.  The consumer can simply sleep for a moment when it 
> receives a record.
> #* I'll attach an example in Java
> # There's a timing aspect to this issue so it may take a few attempts to 
> reproduce
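> A condensed sketch of such a consumer (configuration values are illustrative; 
> the attached DelayedTaskBugConsumer.java is the actual reproduction):
> {code}
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> 
> public class SlowConsumer {
>     public static void main(String[] args) throws InterruptedException {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092");
>         props.put("group.id", "delayed-task-bug");
>         props.put("enable.auto.commit", "true");
>         props.put("max.partition.fetch.bytes", "256"); // small fetches keep responses flowing
>         props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
>         props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
>         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
>             consumer.subscribe(Collections.singletonList("delayed-task-bug"));
>             while (true) {
>                 ConsumerRecords<String, String> records = consumer.poll(100);
>                 for (ConsumerRecord<String, String> ignored : records)
>                     Thread.sleep(10); // slow processing keeps fetched records always ready
>             }
>         }
>     }
> }
> {code}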



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #47

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Modify checkstyle to allow import classes only used in javadoc

[wangguoz] MINOR: Added more integration tests

[wangguoz] KAFKA-3642: Fix NPE from ProcessorStateManager when the changelog 
topic

--
[...truncated 6357 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED
:streams:examples:checkstyleMain
:streams:examples:compileTestJava UP-TO-DATE
:streams:examples:processTestResources UP-TO-DATE
:streams:examples:testClasses UP-TO-DATE
:streams:examples:checkstyleTest UP-TO-DATE
:streams:examples:test UP-TO-DATE
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 0 mins 32.77 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava UP-TO-DATE
:kafka-0.10.0-jdk7:clients:processResources UP-TO-DATE
:kafka-0.10.0-jdk7:clients:classes UP-TO-DATE
:kafka-0.10.0-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-0.10.0-jdk7:clients:createVersionFile
:kafka-0.10.0-jdk7:clients:jar UP-TO-DATE
:kafka-0.10.0-jdk7:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of output files for task 'javadoc' during up-to-date 
check.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of output files 
for task 'javadoc' during up-to-date check.
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.createSnapshot(AbstractFileSnapshotTaskStateChanges.java:49)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.saveCurrent(OutputFilesTaskStateChanges.java:71)
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.snapshotAfterTask(AbstractFileSnapshotTaskStateChanges.java:77)
at 

[jira] [Commented] (KAFKA-3525) max.reserved.broker.id off-by-one error

2016-05-04 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15271527#comment-15271527
 ] 

Gwen Shapira commented on KAFKA-3525:
-

This will wait for the next release. The max reserved is very high and 
user-configurable.

> max.reserved.broker.id off-by-one error
> ---
>
> Key: KAFKA-3525
> URL: https://issues.apache.org/jira/browse/KAFKA-3525
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Alan Braithwaite
>Assignee: Manikumar Reddy
> Fix For: 0.10.1.0
>
>
> There's an off-by-one error in the config check / id generation for 
> max.reserved.broker.id setting.  The auto-generation will generate 
> max.reserved.broker.id as the initial broker id as it's currently written.
> Not sure what the consequences of this are if there's already a broker with 
> that id as I didn't test that behavior.
> This can return 0 + max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/utils/ZkUtils.scala#L213-L215
> However, this does a <= check, which is inclusive of max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/server/KafkaConfig.scala#L984-L986
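> A sketch of the overlap with the default max.reserved.broker.id of 1000 
> (values illustrative):
> {code}
> public class BrokerIdOverlap {
>     public static void main(String[] args) {
>         int maxReservedBrokerId = 1000; // default max.reserved.broker.id
>         int zkSequence = 0;             // first value from the ZK sequence
>         int autoGenerated = zkSequence + maxReservedBrokerId;   // generation linked above
>         boolean userMayConfigure = 1000 <= maxReservedBrokerId; // <= config check linked above
>         // Prints "1000 true": a user-assigned id and the first generated id can collide.
>         System.out.println(autoGenerated + " " + userMayConfigure);
>     }
> }
> {code}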



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3525) max.reserved.broker.id off-by-one error

2016-05-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3525:

Fix Version/s: (was: 0.10.0.0)
   0.10.1.0

> max.reserved.broker.id off-by-one error
> ---
>
> Key: KAFKA-3525
> URL: https://issues.apache.org/jira/browse/KAFKA-3525
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Alan Braithwaite
>Assignee: Manikumar Reddy
> Fix For: 0.10.1.0
>
>
> There's an off-by-one error in the config check / id generation for 
> max.reserved.broker.id setting.  The auto-generation will generate 
> max.reserved.broker.id as the initial broker id as it's currently written.
> Not sure what the consequences of this are if there's already a broker with 
> that id as I didn't test that behavior.
> This can return 0 + max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/utils/ZkUtils.scala#L213-L215
> However, this does a <= check, which is inclusive of max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/server/KafkaConfig.scala#L984-L986



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2236) offset request reply racing with segment rolling

2016-05-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2236:
-
   Resolution: Fixed
Fix Version/s: 0.10.0.0
   Status: Resolved  (was: Patch Available)

> offset request reply racing with segment rolling
> 
>
> Key: KAFKA-2236
> URL: https://issues.apache.org/jira/browse/KAFKA-2236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: Linux x86_64, java.1.7.0_72, discovered using librdkafka 
> based client.
>Reporter: Alfred Landrum
>Assignee: William Thurston
>Priority: Critical
>  Labels: newbie
> Fix For: 0.10.0.0
>
>
> My use case with kafka involves an aggressive retention policy that rolls 
> segment files frequently. My librdkafka based client sees occasional errors 
> to offset requests, showing up in the broker log like:
> [2015-06-02 02:33:38,047] INFO Rolled new log segment for 
> 'receiver-93b40462-3850-47c1-bcda-8a3e221328ca-50' in 1 ms. (kafka.log.Log)
> [2015-06-02 02:33:38,049] WARN [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis)
> java.lang.ArrayIndexOutOfBoundsException: 3
> at kafka.server.KafkaApis.fetchOffsetsBefore(KafkaApis.scala:469)
> at kafka.server.KafkaApis.fetchOffsets(KafkaApis.scala:449)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:411)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:402)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.server.KafkaApis.handleOffsetRequest(KafkaApis.scala:402)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
> at java.lang.Thread.run(Thread.java:745)
> quoting Guozhang Wang's reply to my query on the users list:
> "I check the 0.8.2 code and may probably find a bug related to your issue.
> Basically, segsArray.last.size is called multiple times during handling
> offset requests, while segsArray.last could get concurrent appends. Hence
> it is possible that in line 461, if(segsArray.last.size > 0) returns false
> while later in line 468, if(segsArray.last.size > 0) could return true."
> http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAHwHRrUK-3wdoEAaFbsD0E859Ea0gXixfxgDzF8E3%3D_8r7K%2Bpw%40mail.gmail.com%3E
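> The underlying hazard is reading a concurrently appended-to value twice; a 
> sketch of the shape of a fix, with hypothetical names:
> {code}
> import java.util.List;
> 
> public class FetchOffsetsSketch {
>     interface Segment { long size(); } // hypothetical; only size matters here
> 
>     // Snapshot the last segment's size once instead of re-reading it, since the
>     // active segment can take appends between the two checks.
>     static int offsetCount(List<Segment> segments) {
>         long lastSize = segments.get(segments.size() - 1).size(); // read once
>         return lastSize > 0 ? segments.size() + 1 : segments.size();
>     }
> }
> {code}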



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2236) offset request reply racing with segment rolling

2016-05-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2236:
-
Assignee: William Thurston  (was: Jason Gustafson)

> offset request reply racing with segment rolling
> 
>
> Key: KAFKA-2236
> URL: https://issues.apache.org/jira/browse/KAFKA-2236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: Linux x86_64, java.1.7.0_72, discovered using librdkafka 
> based client.
>Reporter: Alfred Landrum
>Assignee: William Thurston
>Priority: Critical
>  Labels: newbie
>
> My use case with kafka involves an aggressive retention policy that rolls 
> segment files frequently. My librdkafka based client sees occasional errors 
> to offset requests, showing up in the broker log like:
> [2015-06-02 02:33:38,047] INFO Rolled new log segment for 
> 'receiver-93b40462-3850-47c1-bcda-8a3e221328ca-50' in 1 ms. (kafka.log.Log)
> [2015-06-02 02:33:38,049] WARN [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis)
> java.lang.ArrayIndexOutOfBoundsException: 3
> at kafka.server.KafkaApis.fetchOffsetsBefore(KafkaApis.scala:469)
> at kafka.server.KafkaApis.fetchOffsets(KafkaApis.scala:449)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:411)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:402)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.server.KafkaApis.handleOffsetRequest(KafkaApis.scala:402)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
> at java.lang.Thread.run(Thread.java:745)
> quoting Guozhang Wang's reply to my query on the users list:
> "I check the 0.8.2 code and may probably find a bug related to your issue.
> Basically, segsArray.last.size is called multiple times during handling
> offset requests, while segsArray.last could get concurrent appends. Hence
> it is possible that in line 461, if(segsArray.last.size > 0) returns false
> while later in line 468, if(segsArray.last.size > 0) could return true."
> http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAHwHRrUK-3wdoEAaFbsD0E859Ea0gXixfxgDzF8E3%3D_8r7K%2Bpw%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2236) offset request reply racing with segment rolling

2016-05-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15271465#comment-15271465
 ] 

ASF GitHub Bot commented on KAFKA-2236:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1318


> offset request reply racing with segment rolling
> 
>
> Key: KAFKA-2236
> URL: https://issues.apache.org/jira/browse/KAFKA-2236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: Linux x86_64, java.1.7.0_72, discovered using librdkafka 
> based client.
>Reporter: Alfred Landrum
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: newbie
>
> My use case with kafka involves an aggressive retention policy that rolls 
> segment files frequently. My librdkafka based client sees occasional errors 
> to offset requests, showing up in the broker log like:
> [2015-06-02 02:33:38,047] INFO Rolled new log segment for 
> 'receiver-93b40462-3850-47c1-bcda-8a3e221328ca-50' in 1 ms. (kafka.log.Log)
> [2015-06-02 02:33:38,049] WARN [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis)
> java.lang.ArrayIndexOutOfBoundsException: 3
> at kafka.server.KafkaApis.fetchOffsetsBefore(KafkaApis.scala:469)
> at kafka.server.KafkaApis.fetchOffsets(KafkaApis.scala:449)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:411)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:402)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.server.KafkaApis.handleOffsetRequest(KafkaApis.scala:402)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
> at java.lang.Thread.run(Thread.java:745)
> quoting Guozhang Wang's reply to my query on the users list:
> "I check the 0.8.2 code and may probably find a bug related to your issue.
> Basically, segsArray.last.size is called multiple times during handling
> offset requests, while segsArray.last could get concurrent appends. Hence
> it is possible that in line 461, if(segsArray.last.size > 0) returns false
> while later in line 468, if(segsArray.last.size > 0) could return true."
> http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAHwHRrUK-3wdoEAaFbsD0E859Ea0gXixfxgDzF8E3%3D_8r7K%2Bpw%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2236; Offset request reply racing with s...

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1318


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1248

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3632; remove fetcher metrics on shutdown and leader migration

--
[...truncated 8287 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED
:streams:examples:checkstyleMain
:streams:examples:compileTestJava UP-TO-DATE
:streams:examples:processTestResources UP-TO-DATE
:streams:examples:testClasses UP-TO-DATE
:streams:examples:checkstyleTest UP-TO-DATE
:streams:examples:test UP-TO-DATE
:testAll

BUILD SUCCESSFUL

Total time: 59 mins 45.537 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of output files for task 'javadoc' during up-to-date 
check.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of output files 
for task 'javadoc' during up-to-date check.
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.createSnapshot(AbstractFileSnapshotTaskStateChanges.java:49)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.saveCurrent(OutputFilesTaskStateChanges.java:71)
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.snapshotAfterTask(AbstractFileSnapshotTaskStateChanges.java:77)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.snapshotAfterTask(OutputFilesTaskStateChanges.java:26)
at 

list of challenges encountered using 0.9.0.1

2016-05-04 Thread Cliff Rhyne
While at the Kafka Summit I was asked to write up a list of challenges and
confusions my team encountered using Kafka.  We are using 0.9.0.1 and use
the new Java KafkaConsumer.


   1. The new Java KafkaConsumer doesn’t have a method to return the high
   watermark (the last offset in the topic/partition's log).
   2. Can’t connect using the Java client just to check status on topics
   (committed offsets for different consumer groups, high watermark, etc.).
   3. kafka-consumer-groups.sh requires a member of the consumer group to
   be connected and consuming or offset values won't be displayed (an
   artificial prerequisite).
   4. The default config for tracking committed offsets is poor (commits
   should be durable and shouldn’t age out after 24 hours).
   5. It should not be possible to set an offset.retention time <
   log.retention time.
   6. Consumer group rebalances affect all consumers across all topics
   within the consumer group, including topics without a new subscriber.
   7. Changing the broker config requires a one-at-a-time rolling restart
   of the whole cluster; a "service kafka reload" would be nice.
   8. The console consumer still uses “old” consumer-style configuration
   options (--zookeeper). This is a bit strange for anyone who started
   using Kafka with version 0.9 or later, since the CLI options don’t
   correspond to what you expect the consumer to need.
   9. Heartbeating only on poll() causes problems when we have gaps in
   consuming before committing (such as when we publish files and don’t
   want to commit until the publish is complete). Supposedly position()
   will also perform a heartbeat in addition to poll() (I haven’t verified
   this but heard it at the Kafka Summit), but it adds extra complexity to
   the application; a common workaround is sketched below.
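
As a concrete illustration of item 9, a sketch of the pause()/poll() heartbeat workaround on the 0.9 consumer (illustrative only; publishComplete stands in for whatever long-running publish step gates the commit):
{code}
import java.util.Set;
import java.util.function.BooleanSupplier;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public final class HeartbeatWorkaround {
    // Keep the consumer's group membership alive during a long publish by
    // pausing fetches and polling in a loop (0.9 varargs pause/resume API).
    static void heartbeatWhilePublishing(KafkaConsumer<byte[], byte[]> consumer,
                                         BooleanSupplier publishComplete) {
        Set<TopicPartition> assigned = consumer.assignment();
        TopicPartition[] parts = assigned.toArray(new TopicPartition[0]);
        consumer.pause(parts);               // stop fetching, keep membership
        while (!publishComplete.getAsBoolean()) {
            consumer.poll(100);              // heartbeats; no records while paused
        }
        consumer.resume(parts);
        consumer.commitSync();               // commit only once the publish is done
    }
}
{code}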


Thanks for listening,
Cliff Rhyne

-- 
Cliff Rhyne
Software Engineering Manager
e: crh...@signal.co
signal.co


Cut Through the Noise

This e-mail and any files transmitted with it are for the sole use of the
intended recipient(s) and may contain confidential and privileged
information. Any unauthorized use of this email is strictly prohibited.
©2016 Signal. All rights reserved.


Build failed in Jenkins: kafka-0.10.0-jdk7 #46

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3632; remove fetcher metrics on shutdown and leader migration

--
[...truncated 8606 lines...]

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED
:streams:examples:checkstyleMain
:streams:examples:compileTestJava UP-TO-DATE
:streams:examples:processTestResources UP-TO-DATE
:streams:examples:testClasses UP-TO-DATE
:streams:examples:checkstyleTest UP-TO-DATE
:streams:examples:test UP-TO-DATE
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 0 mins 7.135 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.13/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
Build file ': 
line 230
useAnt has been deprecated and is scheduled to be removed in Gradle 3.0. The 
Ant-Based Scala compiler is deprecated, please see 
https://docs.gradle.org/current/userguide/scala_plugin.html.
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-0.10.0-jdk7:clients:compileJava UP-TO-DATE
:kafka-0.10.0-jdk7:clients:processResources UP-TO-DATE
:kafka-0.10.0-jdk7:clients:classes UP-TO-DATE
:kafka-0.10.0-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-0.10.0-jdk7:clients:createVersionFile
:kafka-0.10.0-jdk7:clients:jar UP-TO-DATE
:kafka-0.10.0-jdk7:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of output files for task 'javadoc' during up-to-date 
check.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Failed to capture snapshot of output files 
for task 'javadoc' during up-to-date check.
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.createSnapshot(AbstractFileSnapshotTaskStateChanges.java:49)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.saveCurrent(OutputFilesTaskStateChanges.java:71)
at 
org.gradle.api.internal.changedetection.rules.AbstractFileSnapshotTaskStateChanges.snapshotAfterTask(AbstractFileSnapshotTaskStateChanges.java:77)
at 
org.gradle.api.internal.changedetection.rules.OutputFilesTaskStateChanges.snapshotAfterTask(OutputFilesTaskStateChanges.java:26)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #585

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Modify checkstyle to allow import classes only used in javadoc

[ismael] KAFKA-3632; remove fetcher metrics on shutdown and leader migration

--
[...truncated 4398 lines...]

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.TopicCommandTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandTest > testCreateAlterTopicWithRackAware PASSED

kafka.admin.TopicCommandTest > testTopicDeletion PASSED

kafka.admin.TopicCommandTest > testConfigPreservationAcrossPartitionAlteration 
PASSED

kafka.admin.TopicCommandTest > testAlterIfExists PASSED

kafka.admin.TopicCommandTest > testDeleteIfExists PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED

kafka.admin.AclCommandTest > testAclCli PASSED

kafka.admin.AclCommandTest > testProducerConsumerCli PASSED

kafka.admin.ReassignPartitionsCommandTest > testRackAwareReassign PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > 

Jenkins build is back to normal : kafka-trunk-jdk7 #1247

2016-05-04 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-3632) ConsumerLag metrics persist after partition migration

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3632:
---
   Resolution: Fixed
Fix Version/s: 0.10.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1312
[https://github.com/apache/kafka/pull/1312]

> ConsumerLag metrics persist after partition migration
> -
>
> Key: KAFKA-3632
> URL: https://issues.apache.org/jira/browse/KAFKA-3632
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.2, 0.9.0.1
> Environment: JDK 1.8, Linux
>Reporter: Brian Lueck
>Assignee: Jason Gustafson
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> When a partition is migrated away from a broker, the ConsumerLag metric for 
> the topic/partition gets 'stuck' at the current value. The only way to remove 
> the metric is to restart the broker.
> This appears to be because in AbstractFetcherThread.scala there is no way of 
> removing a metric. See...
> {code}
> class FetcherLagStats(metricId: ClientIdAndBroker) { 
> private val valueFactory = (k: ClientIdTopicPartition) => new 
> FetcherLagMetrics(k) 
> val stats = new Pool[ClientIdTopicPartition, 
> FetcherLagMetrics](Some(valueFactory))
> def getFetcherLagStats(topic: String, partitionId: Int): FetcherLagMetrics = 
> { 
> stats.getAndMaybePut(new ClientIdTopicPartition(metricId.clientId, topic, 
> partitionId)) 
> } 
> }
> {code}
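
For illustration, a rough Java sketch of the missing removal path (hypothetical names; the real class is Scala and backed by kafka.utils.Pool, so this is a sketch of the idea, not the actual fix):
{code}
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: the pool can create lag entries on demand but, as reported
// above, has no counterpart that drops them when a partition migrates away.
class FetcherLagStatsSketch {
    private final ConcurrentHashMap<String, Long> lagByPartition =
            new ConcurrentHashMap<>();

    long getOrCreate(String clientIdTopicPartition) {
        return lagByPartition.computeIfAbsent(clientIdTopicPartition, k -> 0L);
    }

    // The missing piece: remove the entry (and unregister its gauge) on
    // shutdown or leader migration.
    void unregister(String clientIdTopicPartition) {
        lagByPartition.remove(clientIdTopicPartition);
    }
}
{code}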



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3632) ConsumerLag metrics persist after partition migration

2016-05-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15271233#comment-15271233
 ] 

ASF GitHub Bot commented on KAFKA-3632:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1312


> ConsumerLag metrics persist after partition migration
> -
>
> Key: KAFKA-3632
> URL: https://issues.apache.org/jira/browse/KAFKA-3632
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.2, 0.9.0.1
> Environment: JDK 1.8, Linux
>Reporter: Brian Lueck
>Assignee: Jason Gustafson
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> When a partition is migrated away from a broker, the ConsumerLag metric for 
> the topic/partition gets 'stuck' at the current value. The only way to remove 
> the metric is to restart the broker.
> This appears to be because in AbstractFetcherThread.scala there is no way of 
> removing a metric. See...
> {code}
> class FetcherLagStats(metricId: ClientIdAndBroker) { 
> private val valueFactory = (k: ClientIdTopicPartition) => new 
> FetcherLagMetrics(k) 
> val stats = new Pool[ClientIdTopicPartition, 
> FetcherLagMetrics](Some(valueFactory))
> def getFetcherLagStats(topic: String, partitionId: Int): FetcherLagMetrics = 
> { 
> stats.getAndMaybePut(new ClientIdTopicPartition(metricId.clientId, topic, 
> partitionId)) 
> } 
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3632: remove fetcher metrics on shutdown...

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1312


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3565) Producer's throughput lower with compressed data after KIP-31/32

2016-05-04 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15271185#comment-15271185
 ] 

Jun Rao commented on KAFKA-3565:


[~becket_qin], thanks for the results. As I was looking at the results with 
linger.ms=1, I was expecting the consumer throughput to be more or less the 
same between trunk and 0.9.0 since the batch sizes are about the same in those 
tests. Half of the results are actually like that. However, the other half 
looks a bit weird. It seems that trunk can be either much better or worse than 
0.9.0. Do you have a good explanation for those? How reliable are those numbers? 

{code}
max.in.flight.requests.per.connection=1, valueBound=500, linger.ms=10, 
messageSize=1000, compression.type=gzip (30.4 > 16.0)
max.in.flight.requests.per.connection=1, valueBound=500, linger.ms=10, 
messageSize=1000, compression.type=snappy (47.2 < 61.0)
max.in.flight.requests.per.connection=1, valueBound=5000, linger.ms=10, 
messageSize=100, compression.type=gzip (47.7 > 33.3)
max.in.flight.requests.per.connection=1, valueBound=5000, linger.ms=10, 
messageSize=100, compression.type=snappy (28.6 > 21.8)
{code}

> Producer's throughput lower with compressed data after KIP-31/32
> 
>
> Key: KAFKA-3565
> URL: https://issues.apache.org/jira/browse/KAFKA-3565
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Relative offsets were introduced by KIP-31 so that the broker does not have 
> to recompress data (this was previously required after offsets were 
> assigned). The implicit assumption is that reducing CPU usage required by 
> recompression would mean that producer throughput for compressed data would 
> increase.
> However, this doesn't seem to be the case:
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--012.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   59.030 seconds
> {"records_per_sec": 519418.343653, "mb_per_sec": 49.54}
> {code}
> Full results: https://gist.github.com/ijuma/0afada4ff51ad6a5ac2125714d748292
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--013.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100.compression_type=snappy
> status: PASS
> run time:   1 minute 0.243 seconds
> {"records_per_sec": 427308.818848, "mb_per_sec": 40.75}
> {code}
> Full results: https://gist.github.com/ijuma/e49430f0548c4de5691ad47696f5c87d
> The difference for the uncompressed case is smaller (and within what one 
> would expect given the additional size overhead caused by the timestamp 
> field):
> {code}
> Commit: eee95228fabe1643baa016a2d49fb0a9fe2c66bd (one before KIP-31/32)
> test_id:
> 2016-04-15--010.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 4.176 seconds
> {"records_per_sec": 321018.17747, "mb_per_sec": 30.61}
> {code}
> Full results: https://gist.github.com/ijuma/5fec369d686751a2d84debae8f324d4f
> {code}
> Commit: fa594c811e4e329b6e7b897bce910c6772c46c0f (KIP-31/32)
> test_id:
> 2016-04-15--014.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100
> status: PASS
> run time:   1 minute 5.079 seconds
> {"records_per_sec": 291777.608696, "mb_per_sec": 27.83}
> {code}
> Full results: https://gist.github.com/ijuma/1d35bd831ff9931448b0294bd9b787ed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2236) offset request reply racing with segment rolling

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2236:
---
Reviewer: Guozhang Wang
  Status: Patch Available  (was: Open)

> offset request reply racing with segment rolling
> 
>
> Key: KAFKA-2236
> URL: https://issues.apache.org/jira/browse/KAFKA-2236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: Linux x86_64, java.1.7.0_72, discovered using librdkafka 
> based client.
>Reporter: Alfred Landrum
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: newbie
>
> My use case with kafka involves an aggressive retention policy that rolls 
> segment files frequently. My librdkafka based client sees occasional errors 
> to offset requests, showing up in the broker log like:
> [2015-06-02 02:33:38,047] INFO Rolled new log segment for 
> 'receiver-93b40462-3850-47c1-bcda-8a3e221328ca-50' in 1 ms. (kafka.log.Log)
> [2015-06-02 02:33:38,049] WARN [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis)
> java.lang.ArrayIndexOutOfBoundsException: 3
> at kafka.server.KafkaApis.fetchOffsetsBefore(KafkaApis.scala:469)
> at kafka.server.KafkaApis.fetchOffsets(KafkaApis.scala:449)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:411)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:402)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.server.KafkaApis.handleOffsetRequest(KafkaApis.scala:402)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
> at java.lang.Thread.run(Thread.java:745)
> quoting Guozhang Wang's reply to my query on the users list:
> "I check the 0.8.2 code and may probably find a bug related to your issue.
> Basically, segsArray.last.size is called multiple times during handling
> offset requests, while segsArray.last could get concurrent appends. Hence
> it is possible that in line 461, if(segsArray.last.size > 0) returns false
> while later in line 468, if(segsArray.last.size > 0) could return true."
> http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAHwHRrUK-3wdoEAaFbsD0E859Ea0gXixfxgDzF8E3%3D_8r7K%2Bpw%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Modify checkstyle to allow import class...

2016-05-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1317


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-725) Broker Exception: Attempt to read with a maximum offset less than start offset

2016-05-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15271051#comment-15271051
 ] 

Jiangjie Qin commented on KAFKA-725:


[~junrao] I asked this question in the RB. It seems the consumer in this case 
is not the Java consumer. Theoretically a java consumer can only fetch beyond 
HW when unclean leader election occurs.

> Broker Exception: Attempt to read with a maximum offset less than start offset
> --
>
> Key: KAFKA-725
> URL: https://issues.apache.org/jira/browse/KAFKA-725
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>Assignee: Stig Rohde Døssing
> Fix For: 0.10.0.0
>
>
> I have a simple consumer that's reading from a single topic/partition pair. 
> Running it seems to trigger these messages on the broker periodically:
> 2013/01/22 23:04:54.936 ERROR [KafkaApis] [kafka-request-handler-4] [kafka] 
> []  [KafkaApi-466] error when processing request (MyTopic,4,7951732,2097152)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (7951715) less than the start offset (7951732).
> at kafka.log.LogSegment.read(LogSegment.scala:105)
> at kafka.log.Log.read(Log.scala:390)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:372)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:330)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:326)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.map(Map.scala:93)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:326)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:165)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:164)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> at 
> kafka.server.KafkaApis.maybeUnblockDelayedFetchRequests(KafkaApis.scala:164)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:186)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:185)
> at scala.collection.immutable.Map$Map2.foreach(Map.scala:127)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:185)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> at java.lang.Thread.run(Thread.java:619)
> When I shut the consumer down, I don't see the exceptions anymore.
> This is the code that my consumer is running:
>   while(true) {
> // we believe the consumer to be connected, so try and use it for 
> a fetch request
> val request = new FetchRequestBuilder()
>   .addFetch(topic, partition, nextOffset, fetchSize)
>   .maxWait(Int.MaxValue)
>   // TODO for super high-throughput, might be worth waiting for 
> more bytes
>   .minBytes(1)
>   .build
> debug("Fetching messages for stream %s and offset %s." format 
> (streamPartition, nextOffset))
> val messages = connectedConsumer.fetch(request)
> debug("Fetch complete for stream %s and offset %s. Got messages: 
> %s" format (streamPartition, nextOffset, messages))
> if (messages.hasError) {
>   warn("Got error code from broker for %s: %s. Shutting down 
> consumer to trigger a reconnect." format (streamPartition, 
> messages.errorCode(topic, partition)))
>   ErrorMapping.maybeThrowException(messages.errorCode(topic, 
> partition))
> }
> messages.messageSet(topic, partition).foreach(msg => {
>   watchers.foreach(_.onMessagesReady(msg.offset.toString, 
> msg.message.payload))
>   nextOffset = msg.nextOffset
> })
>   }
> Any idea what might be causing this error?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3646) Console producer using new producer should set timestamp

2016-05-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3646.
--
Resolution: Not A Bug

This is not an issue, as {{KafkaProducer}} already sets the timestamp to the 
current wall-clock time.

> Console producer using new producer should set timestamp
> 
>
> Key: KAFKA-3646
> URL: https://issues.apache.org/jira/browse/KAFKA-3646
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> {{kafka.tools.ConsoleProducer}}'s default usage of the new producer does not 
> explicitly set the timestamp, and hence for the default timestamp type setting 
> (CreateTime) the timestamp returned from the broker would be set to -1.
> We need to consider whether or not it makes sense to set the timestamp in the 
> console producer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3656) Avoid stressing system more when already under stress

2016-05-04 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei updated KAFKA-3656:
--
Status: Patch Available  (was: In Progress)

> Avoid stressing system more when already under stress
> -
>
> Key: KAFKA-3656
> URL: https://issues.apache.org/jira/browse/KAFKA-3656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Alexey Raga
>Assignee: Liquan Pei
>
> I am working with Kafka Connect now and I am seeing error messages like this:
> {code}
> [2016-05-04 03:11:28,226] ERROR Failed to flush 
> WorkerSourceTask{id=geo-connector-0}, timed out while waiting for producer to 
> flush outstanding messages, 151860 left ([FAILED toString()]) 
> (org.apache.kafka.connect.runtime.WorkerSourceTask:237)
> [2016-05-04 03:11:28,227] ERROR Failed to commit offsets for 
> WorkerSourceTask{id=geo-connector-0} 
> (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
> {code}
> I didn't figure out the reason why Connect would pull so many records into 
> memory when it clearly can't produce that fast and I don't yet know why 
> producing messages is slow.
> But the part of {{151860 left ([FAILED toString()]) }} is interesting and I 
> looked at the code and found this:
> {code}
> if (timeoutMs <= 0) {
> log.error(
> "Failed to flush {}, timed out while waiting 
> for producer to flush outstanding "
> + "messages, {} left ({})", this, 
> outstandingMessages.size(), outstandingMessages);
> finishFailedFlush();
> return false;
> }
> {code}
> So when the connector is under stress and, assuming {{151860}} messages, 
> under heavy memory pressure, the code chooses to take pretty much {{4 * 
> 151860}} byte arrays and convert them to a Java string.
> This not only eats more memory and adds to GC, but is also useless for 
> logging because the actual string, if it didn't fail, would look like:
> {code}
> (topic=lamington--geo-connector, partition=null, key=null, 
> value=[B@62c66f62=ProducerRecord(topic=lamington--geo-connector, 
> partition=null, key=null, value=[B@62c66f62, 
> ProducerRecord(topic=lamington--geo-connector, partition=null, key=null, .
> {code}
> I think it is a bug and a string representation of the outstanding messages 
> should be removed from the log.
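
A minimal sketch of the logging change proposed above (assuming SLF4J, as the quoted code suggests): log the count, never the collection itself:
{code}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class FlushLoggingSketch {
    private static final Logger log = LoggerFactory.getLogger(FlushLoggingSketch.class);

    boolean finishFailedFlush(List<?> outstandingMessages) {
        // Passing outstandingMessages as a log argument forces a huge
        // toString(); the size alone carries the useful information.
        log.error("Failed to flush, timed out while waiting for producer to "
                + "flush outstanding messages, {} left", outstandingMessages.size());
        return false;
    }
}
{code}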



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-3656) Avoid stressing system more when already under stress

2016-05-04 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3656 started by Liquan Pei.
-
> Avoid stressing system more when already under stress
> -
>
> Key: KAFKA-3656
> URL: https://issues.apache.org/jira/browse/KAFKA-3656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Alexey Raga
>Assignee: Liquan Pei
>
> I am working with Kafka Connect now and I am seeing error messages like this:
> {code}
> [2016-05-04 03:11:28,226] ERROR Failed to flush 
> WorkerSourceTask{id=geo-connector-0}, timed out while waiting for producer to 
> flush outstanding messages, 151860 left ([FAILED toString()]) 
> (org.apache.kafka.connect.runtime.WorkerSourceTask:237)
> [2016-05-04 03:11:28,227] ERROR Failed to commit offsets for 
> WorkerSourceTask{id=geo-connector-0} 
> (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
> {code}
> I didn't figure out the reason why Connect would pull so many records into 
> memory when it clearly can't produce that fast and I don't yet know why 
> producing messages is slow.
> But the part of {{151860 left ([FAILED toString()]) }} is interesting and I 
> looked at the code and found this:
> {code}
> if (timeoutMs <= 0) {
> log.error(
> "Failed to flush {}, timed out while waiting 
> for producer to flush outstanding "
> + "messages, {} left ({})", this, 
> outstandingMessages.size(), outstandingMessages);
> finishFailedFlush();
> return false;
> }
> {code}
> So when the connector is under stress and, assuming {{151860}} messages, 
> under heavy memory pressure, the code chooses to take pretty much {{4 * 
> 151860}} byte arrays and convert them to a Java string.
> This not only eats more memory and adds to GC, but is also useless for 
> logging because the actual string, if it didn't fail, would look like:
> {code}
> (topic=lamington--geo-connector, partition=null, key=null, 
> value=[B@62c66f62=ProducerRecord(topic=lamington--geo-connector, 
> partition=null, key=null, value=[B@62c66f62, 
> ProducerRecord(topic=lamington--geo-connector, partition=null, key=null, .
> {code}
> I think it is a bug and a string representation of the outstanding messages 
> should be removed from the log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA 3656: Remove logging outstanding message...

2016-05-04 Thread Ishiihara
GitHub user Ishiihara opened a pull request:

https://github.com/apache/kafka/pull/1319

KAFKA 3656: Remove logging outstanding messages when producer flush fails 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Ishiihara/kafka kafka-3656

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1319.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1319


commit 77af8cf2a7fbb0ecb58b63715ee1926655d96103
Author: Liquan Pei 
Date:   2016-05-04T17:08:37Z

Removing logging outstanding messages




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3657) NewProducer NullPointerException on ProduceRequest

2016-05-04 Thread Vamsi Subhash Achanta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15271007#comment-15271007
 ] 

Vamsi Subhash Achanta commented on KAFKA-3657:
--

I haven't tested it against the 0.9.x branch. This occurs in production 
intermittently and we are still on 0.8.2.1.

> NewProducer NullPointerException on ProduceRequest
> --
>
> Key: KAFKA-3657
> URL: https://issues.apache.org/jira/browse/KAFKA-3657
> Project: Kafka
>  Issue Type: Bug
>  Components: network, producer 
>Affects Versions: 0.8.2.1
> Environment: linux 3.2.0 debian7
>Reporter: Vamsi Subhash Achanta
>Assignee: Jun Rao
>
> The producer, upon send.get() on the future, appends the record batches to 
> the accumulator, and Sender.java (a separate thread) flushes them to the 
> server. The produce request waits on the countDownLatch in the 
> FutureRecordMetadata:
> public RecordMetadata get() throws InterruptedException, 
> ExecutionException {
> this.result.await();
> In this case, the client thread is blocked forever (as it is get() without a 
> timeout) waiting for the response, and the response upon poll by the Sender 
> returns an attachment with the batch value as null. The batch is processed and 
> the request is errored out. The Sender catches a global-level exception and 
> then goes ahead. As the accumulator is drained, the response will never be 
> returned, and the producer client thread calling get() is blocked forever on 
> the latch await call.
> I checked at the server end but still haven't found the reason for null 
> batch. Any pointers on this?
> ERROR [2016-05-01 21:00:09,256] [kafka-producer-network-thread |producer-app] 
> [Sender] message_id: group_id: : Uncaught error in kafka producer I/O thread:
> ! java.lang.NullPointerException: null
> ! at 
> org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:266)
> ! at 
> org.apache.kafka.clients.producer.internals.Sender.handleResponse(Sender.java:236)
> ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:196)
> ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
> ! at java.lang.Thread.run(Thread.java:745)
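
Until the root cause of the null batch is found, a common client-side mitigation (an illustrative sketch, not a fix for the NPE itself) is to bound the wait:
{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

final class BoundedSend {
    // Bound the wait so a response lost by the I/O thread cannot block the
    // caller forever on the internal latch.
    static RecordMetadata sendWithTimeout(KafkaProducer<String, String> producer,
                                          ProducerRecord<String, String> record)
            throws Exception {
        try {
            return producer.send(record).get(30, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            throw new RuntimeException("produce timed out after 30s", e);
        }
    }
}
{code}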



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3649) Add capability to query broker process for configuration properties

2016-05-04 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270996#comment-15270996
 ] 

Grant Henke commented on KAFKA-3649:


[~liquanpei] I see you assigned yourself to this. I think this should be 
addressed after KIP-4 adds the Describe/Alter config requests to the broker. I 
still need to add the proposed protocol format to the wiki, but see here for 
details: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-PublicInterfaces

> Add capability to query broker process for configuration properties
> ---
>
> Key: KAFKA-3649
> URL: https://issues.apache.org/jira/browse/KAFKA-3649
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, config, core
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: David Tucker
>Assignee: Liquan Pei
>
> Developing an API by which running brokers could be queried for their various 
> configuration settings is an important feature for managing the Kafka cluster.
> Long term, the API could be enhanced to allow updates for those properties 
> that can be changed at run time ... but this involves a more thorough 
> evaluation of configuration properties (which ones can be modified in a 
> running broker and which require a restart {of individual nodes or the entire 
> cluster}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3649) Add capability to query broker process for configuration properties

2016-05-04 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3649:
-

Assignee: Liquan Pei

> Add capability to query broker process for configuration properties
> ---
>
> Key: KAFKA-3649
> URL: https://issues.apache.org/jira/browse/KAFKA-3649
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, config, core
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: David Tucker
>Assignee: Liquan Pei
>
> Developing an API by which running brokers could be queried for their various 
> configuration settings is an important feature for managing the Kafka cluster.
> Long term, the API could be enhanced to allow updates for those properties 
> that can be changed at run time ... but this involves a more thorough 
> evaluation of configuration properties (which ones can be modified in a 
> running broker and which require a restart {of individual nodes or the entire 
> cluster}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3656) Avoid stressing system more when already under stress

2016-05-04 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3656:
-

Assignee: Liquan Pei

> Avoid stressing system more when already under stress
> -
>
> Key: KAFKA-3656
> URL: https://issues.apache.org/jira/browse/KAFKA-3656
> Project: Kafka
>  Issue Type: Bug
>Reporter: Alexey Raga
>Assignee: Liquan Pei
>
> I am working with Kafka Connect now and I am seeing error messages like this:
> {code}
> [2016-05-04 03:11:28,226] ERROR Failed to flush 
> WorkerSourceTask{id=geo-connector-0}, timed out while waiting for producer to 
> flush outstanding messages, 151860 left ([FAILED toString()]) 
> (org.apache.kafka.connect.runtime.WorkerSourceTask:237)
> [2016-05-04 03:11:28,227] ERROR Failed to commit offsets for 
> WorkerSourceTask{id=geo-connector-0} 
> (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
> {code}
> I didn't figure out the reason why Connect would pull so many records into 
> memory when it clearly can't produce that fast and I don't yet know why 
> producing messages is slow.
> But the part of {{151860 left ([FAILED toString()]) }} is interesting and I 
> looked at the code and found this:
> {code}
> if (timeoutMs <= 0) {
> log.error(
> "Failed to flush {}, timed out while waiting 
> for producer to flush outstanding "
> + "messages, {} left ({})", this, 
> outstandingMessages.size(), outstandingMessages);
> finishFailedFlush();
> return false;
> }
> {code}
> So when the connector is under stress and, assuming {{151860}} messages, 
> under heavy memory pressure, the code chooses to take pretty much {{4 * 
> 151860}} byte arrays and convert them to a Java string.
> This not only eats more memory and adds to GC, but is also useless for 
> logging because the actual string, if it didn't fail, would look like:
> {code}
> (topic=lamington--geo-connector, partition=null, key=null, 
> value=[B@62c66f62=ProducerRecord(topic=lamington--geo-connector, 
> partition=null, key=null, value=[B@62c66f62, 
> ProducerRecord(topic=lamington--geo-connector, partition=null, key=null, .
> {code}
> I think it is a bug and a string representation of the outstanding messages 
> should be removed from the log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [jira] [Updated] (KAFKA-3160) Kafka LZ4 framing code miscalculates header checksum

2016-05-04 Thread Dana Powers
You can assign to me also


Re: [DISCUSS] KIP-48 Support for delegation tokens as an authentication mechanism

2016-05-04 Thread parth brahmbhatt
Thanks for reviewing, Gwen. The wiki already has details on token expiration
under the token acquisition process section.
The current proposal is that tokens will expire based on a server-side
configuration (default 24 hours) unless renewed. Renewal is only allowed
until the max lifetime of the token. Alternatively, we could also make that
an optional param and let the server-side default serve as the upper bound.

To your second point, it will be done exactly the same way we would support
multiple keytabs. The calling client will have to put the tokens it wants
to use in the Subject instance and call produce/consume inside
Subject.doAs. Each caller will have to keep track of its own Subject. I
will have to look at the code to see if we support this feature right now,
but my understanding is that delegation tokens shouldn't need any special
treatment, as they are just another type of credential in the Subject.
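
A rough sketch of that per-caller pattern (the token type and its placement in the Subject are assumptions for illustration, not a settled API):
{code}
import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

final class DoAsSketch {
    // Each caller keeps its own Subject holding its own delegation token and
    // produces inside doAs, so tokens are never shared across callers.
    static void produceAs(Object delegationToken,
                          KafkaProducer<String, String> producer) {
        Subject subject = new Subject();
        subject.getPrivateCredentials().add(delegationToken);
        Subject.doAs(subject, (PrivilegedAction<Void>) () -> {
            producer.send(new ProducerRecord<>("topic", "value"));
            return null;
        });
    }
}
{code}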

I would also like to know your opinion about infinite renewal (my
recommendation is to not support this), tokens renewing themselves (my
recommendation is to not support this), and most importantly your choice
between the alternatives listed on this thread (I am leaning towards
alternative 2, minus the controller distributing the secret). Thanks again
for reviewing.

Thanks
Parth



On Wed, May 4, 2016 at 6:17 AM, Gwen Shapira  wrote:

> Harsha,
>
> I was thinking of the Rest Proxy. I didn't see your design yet, but in
> our proxy, we have a set of producers, which will serve multiple users
> going through the proxy. Since these users will have different
> privileges, they'll need to authenticate separately, and can't share a
> token.
>
> Am I missing anything?
>
> Gwen
>
> On Tue, May 3, 2016 at 2:11 PM, Harsha  wrote:
> > Gwen,
> >On your second point: can you describe a use case where
> >multiple clients ended up sharing a producer, and even if they
> >do, why can they not use the single token that the producer
> >captures? Why would we need multiple clients with different
> >tokens sharing a single producer instance?  Also in this
> >case the other clients have access to all the tokens, no?
> >
> > Thanks,
> > Harsha
> >
> >
> > On Tue, May 3, 2016, at 11:49 AM, Gwen Shapira wrote:
> >> Sorry for the delay:
> >>
> >> Two questions that we didn't see in the wiki:
> >> 1. Is there an expiration for delegation tokens? Renewal? How do we
> >> revoke them?
> >> 2. If we want to use delegation tokens for "do-as" (say, submit Storm
> >> job as my user), we will need a producer for every job (we can't share
> >> them between multiple jobs running on same node), since we only
> >> authenticate when connecting. Is there a plan to change this for
> >> delegation tokens, in order to allow multiple users with different
> >> tokens to share a client?
> >>
> >> Gwen
> >>
> >> On Tue, May 3, 2016 at 9:12 AM, parth brahmbhatt
> >>  wrote:
> >> > Bumping this up one more time, can other committers review?
> >> >
> >> > Thanks
> >> > Parth
> >> >
> >> > On Tue, Apr 26, 2016 at 9:07 AM, Harsha  wrote:
> >> >
> >> >> Parth,
> >> >>   Overall current design looks good to me. I am +1 on the
> KIP.
> >> >>
> >> >> Gwen , Jun can you review this as well.
> >> >>
> >> >> -Harsha
> >> >>
> >> >> On Tue, Apr 19, 2016, at 09:57 AM, parth brahmbhatt wrote:
> >> >> > Thanks for review Jitendra.
> >> >> >
> >> >> > I don't like the idea of infinite lifetime but I see the Streaming
> use
> >> >> > case. Even for Streaming use case I was hoping there will be some
> notion
> >> >> > of
> >> >> > master/driver that can get new delegation tokens at fixed interval
> and
> >> >> > distribute to workers. If that is not the case for we can discuss
> >> >> > delegation tokens renewing them self and the security implications
> of the
> >> >> > same.
> >> >> >
> >> >> > I did not want clients to fetch tokens from zookeeper, overall I
> think
> >> >> > its
> >> >> > better if clients don't rely on our metadata store and I think we
> are
> >> >> > moving in that direction with all the KIP-4 improvements.  I chose
> >> >> > zookeeper as in this case the client will still just talk to
> broker , its
> >> >> > the brokers that will use zookeeper which we already do for a lot
> of
> >> >> > other
> >> >> > usecases + ease of development + and the ability so tokens will
> survive
> >> >> > even a rolling restart/cluster failure. if a majority agrees the
> added
> >> >> > complexity to have controller forwarding keys to all broker is
> justified
> >> >> > as
> >> >> > it provides tighter security , I am fine with that option too.
> >> >> >
> >> >> > Given zookeeper 

[jira] [Updated] (KAFKA-3525) max.reserved.broker.id off-by-one error

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3525:
---
Priority: Major  (was: Blocker)

> max.reserved.broker.id off-by-one error
> ---
>
> Key: KAFKA-3525
> URL: https://issues.apache.org/jira/browse/KAFKA-3525
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Alan Braithwaite
>Assignee: Manikumar Reddy
> Fix For: 0.10.0.0
>
>
> There's an off-by-one error in the config check / id generation for the 
> max.reserved.broker.id setting. As currently written, the auto-generation 
> will generate max.reserved.broker.id as the initial broker id.
> Not sure what the consequences of this are if there's already a broker with 
> that id, as I didn't test that behavior.
> This can return 0 + max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/utils/ZkUtils.scala#L213-L215
> However, this does a <= check, which is inclusive of max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/server/KafkaConfig.scala#L984-L986
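
The arithmetic of the off-by-one, as a tiny illustrative sketch (hypothetical values, not the actual Kafka code):
{code}
public class OffByOneSketch {
    public static void main(String[] args) {
        int maxReservedBrokerId = 1000;   // the configured upper bound
        int zkSequence = 0;               // first ZK sequence value handed out

        // Auto-generation: the first generated id == maxReservedBrokerId ...
        int firstAutoId = zkSequence + maxReservedBrokerId;            // 1000

        // ... while validation accepts manual ids <= maxReservedBrokerId,
        // so the same id is legal on both sides and can collide.
        boolean manualIdAllowed = firstAutoId <= maxReservedBrokerId;  // true

        System.out.println(firstAutoId + " " + manualIdAllowed);
        // Fix sketch: generate zkSequence + maxReservedBrokerId + 1,
        // or validate manual ids with a strict '<'.
    }
}
{code}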



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3160) Kafka LZ4 framing code miscalculates header checksum

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3160:
---
Fix Version/s: 0.10.0.0

> Kafka LZ4 framing code miscalculates header checksum
> 
>
> Key: KAFKA-3160
> URL: https://issues.apache.org/jira/browse/KAFKA-3160
> Project: Kafka
>  Issue Type: Bug
>  Components: compression
>Affects Versions: 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.8.2.2, 0.9.0.1
>Reporter: Dana Powers
>Assignee: Magnus Edenhill
>  Labels: compatibility, compression, lz4
> Fix For: 0.10.0.0
>
>
> KAFKA-1493 partially implements the LZ4 framing specification, but it 
> incorrectly calculates the header checksum. This causes 
> KafkaLZ4BlockInputStream to raise an error 
> [IOException(DESCRIPTOR_HASH_MISMATCH)] if a client sends *correctly* framed 
> LZ4 data. It also causes KafkaLZ4BlockOutputStream to generate incorrectly 
> framed LZ4 data, which means clients decoding LZ4 messages from kafka will 
> always receive incorrectly framed data.
> Specifically, the current implementation includes the 4-byte MagicNumber in 
> the checksum, which is incorrect.
> http://cyan4973.github.io/lz4/lz4_Frame_format.html
> Third-party clients that attempt to use off-the-shelf lz4 framing find that 
> brokers reject messages as having a corrupt checksum. So currently non-Java 
> clients must 'fix up' lz4 packets to deal with the broken checksum.
> Magnus first identified this issue in librdkafka; kafka-python has the same 
> problem.
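
For reference, a sketch of the spec-compliant header checksum using the lz4-java xxhash API (assuming net.jpountz.xxhash is available, as it is on the broker classpath):
{code}
import net.jpountz.xxhash.XXHashFactory;

final class Lz4HeaderChecksum {
    // Per the frame spec linked above: hash only the frame descriptor bytes
    // (FLG, BD, optional content size) -- never the 4-byte magic number --
    // and keep the second byte of the xxh32 value.
    static byte headerChecksum(byte[] frameDescriptor) {
        int hash = XXHashFactory.fastestInstance().hash32()
                .hash(frameDescriptor, 0, frameDescriptor.length, 0); // seed 0
        return (byte) ((hash >> 8) & 0xFF);
    }
}
{code}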



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3160) Kafka LZ4 framing code miscalculates header checksum

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3160:
---
Priority: Critical  (was: Major)

> Kafka LZ4 framing code miscalculates header checksum
> 
>
> Key: KAFKA-3160
> URL: https://issues.apache.org/jira/browse/KAFKA-3160
> Project: Kafka
>  Issue Type: Bug
>  Components: compression
>Affects Versions: 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.8.2.2, 0.9.0.1
>Reporter: Dana Powers
>Assignee: Magnus Edenhill
>Priority: Critical
>  Labels: compatibility, compression, lz4
> Fix For: 0.10.0.0
>
>
> KAFKA-1493 partially implements the LZ4 framing specification, but it 
> incorrectly calculates the header checksum. This causes 
> KafkaLZ4BlockInputStream to raise an error 
> [IOException(DESCRIPTOR_HASH_MISMATCH)] if a client sends *correctly* framed 
> LZ4 data. It also causes KafkaLZ4BlockOutputStream to generate incorrectly 
> framed LZ4 data, which means clients decoding LZ4 messages from kafka will 
> always receive incorrectly framed data.
> Specifically, the current implementation includes the 4-byte MagicNumber in 
> the checksum, which is incorrect.
> http://cyan4973.github.io/lz4/lz4_Frame_format.html
> Third-party clients that attempt to use off-the-shelf lz4 framing find that 
> brokers reject their messages as having a corrupt checksum, so non-java 
> clients currently must 'fix up' lz4 packets to deal with the broken checksum.
> Magnus first identified this issue in librdkafka; kafka-python has the same 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3652) Return error response for unsupported version of ApiVersionsRequest

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3652:
---
Priority: Blocker  (was: Major)

> Return error response for unsupported version of ApiVersionsRequest
> ---
>
> Key: KAFKA-3652
> URL: https://issues.apache.org/jira/browse/KAFKA-3652
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Discussion is in the PR https://github.com/apache/kafka/pull/1286. 
> Don't fail authentication (for SASL) or break connections (normal operation) 
> when an unsupported version of ApiVersionsRequest is received. Instead, return 
> an error response so that the client can retry with an earlier version of the request.
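A sketch of the desired broker-side behavior (handler and helper names are hypothetical, not the actual Kafka internals):

{code}
// Sketch only -- handler names and response builder are hypothetical:
if (header.apiKey() == ApiKeys.API_VERSIONS.id && !isSupportedVersion(header.apiVersion())) {
    // Answer with UNSUPPORTED_VERSION instead of failing SASL authentication
    // or closing the connection, so the client can downgrade and retry.
    sendResponse(apiVersionsResponseWithError(Errors.UNSUPPORTED_VERSION));
}
{code}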



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1246

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-2684; Add force option to topic / config command so they can be

--
[...truncated 2475 lines...]

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslSslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SaslSslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslSslConsumerTest > testListTopics PASSED

kafka.api.SaslSslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslSslConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslSslConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslSslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SslConsumerTest > testListTopics PASSED

kafka.api.SslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.SaslPlaintextConsumerTest > testPauseStateNotPreservedByRebalance 
PASSED

kafka.api.SaslPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime 
PASSED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
PASSED

kafka.api.SaslPlainPlaintextConsumerTest > 
testPauseStateNotPreservedByRebalance PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testPartitionReassignmentCallback 
PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED


[jira] [Commented] (KAFKA-3657) NewProducer NullPointerException on ProduceRequest

2016-05-04 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270823#comment-15270823
 ] 

Ismael Juma commented on KAFKA-3657:


Thanks for the bug report. Do you know if this happens with 0.9.0.1 or trunk?

> NewProducer NullPointerException on ProduceRequest
> --
>
> Key: KAFKA-3657
> URL: https://issues.apache.org/jira/browse/KAFKA-3657
> Project: Kafka
>  Issue Type: Bug
>  Components: network, producer 
>Affects Versions: 0.8.2.1
> Environment: linux 3.2.0 debian7
>Reporter: Vamsi Subhash Achanta
>Assignee: Jun Rao
>
> On send().get(), the producer appends record batches to the accumulator, and 
> Sender.java (a separate thread) flushes them to the server. The produce 
> request waits on the countDownLatch in the FutureRecordMetadata:
> public RecordMetadata get() throws InterruptedException, 
> ExecutionException {
> this.result.await();
> In this case, the client thread is blocked forever (as it is get() without a 
> timeout) waiting for the response, and the response, upon poll by the Sender, 
> carries an attachment with a null batch value. The batch is processed and the 
> request is errored out. The Sender catches the exception at a global level 
> and moves on. As the accumulator is drained, the response will never be 
> returned, and the producer client thread calling get() is blocked forever on 
> the latch await call.
> I checked at the server end but still haven't found the reason for the null 
> batch. Any pointers on this?
> ERROR [2016-05-01 21:00:09,256] [kafka-producer-network-thread |producer-app] 
> [Sender] message_id: group_id: : Uncaught error in kafka producer I/O thread:
> ! java.lang.NullPointerException: null
> ! at 
> org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:266)
> ! at 
> org.apache.kafka.clients.producer.internals.Sender.handleResponse(Sender.java:236)
> ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:196)
> ! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
> ! at java.lang.Thread.run(Thread.java:745)
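A common client-side mitigation, separate from fixing the underlying null batch, is to bound the wait; a minimal sketch (broker address and topic are placeholders):

{code}
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BoundedSend {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // get(timeout, unit) throws TimeoutException if the latch is never
            // counted down, so a lost response surfaces as an error instead of
            // a thread blocked forever.
            RecordMetadata metadata = producer
                    .send(new ProducerRecord<>("test-topic", "key", "value"))
                    .get(30, TimeUnit.SECONDS);
            System.out.println("offset = " + metadata.offset());
        }
    }
}
{code}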



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-725) Broker Exception: Attempt to read with a maximum offset less than start offset

2016-05-04 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270789#comment-15270789
 ] 

Jun Rao commented on KAFKA-725:
---

[~Srdo], thanks for the patch. It's still not very clear to me how a consumer 
can trigger the IllegalArgumentException even without the patch. The broker 
only returns messages up to the high watermark (HW) to the consumer. So, the 
offset from a consumer should always be <= HW. The problem can only occur if a 
consumer uses an offset > HW, but <= the log end offset, which should never 
happen in a normal consumer.

> Broker Exception: Attempt to read with a maximum offset less than start offset
> --
>
> Key: KAFKA-725
> URL: https://issues.apache.org/jira/browse/KAFKA-725
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>Assignee: Stig Rohde Døssing
> Fix For: 0.10.0.0
>
>
> I have a simple consumer that's reading from a single topic/partition pair. 
> Running it seems to trigger these messages on the broker periodically:
> 2013/01/22 23:04:54.936 ERROR [KafkaApis] [kafka-request-handler-4] [kafka] 
> []  [KafkaApi-466] error when processing request (MyTopic,4,7951732,2097152)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (7951715) less than the start offset (7951732).
> at kafka.log.LogSegment.read(LogSegment.scala:105)
> at kafka.log.Log.read(Log.scala:390)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:372)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:330)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:326)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.map(Map.scala:93)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:326)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:165)
> at 
> kafka.server.KafkaApis$$anonfun$maybeUnblockDelayedFetchRequests$2.apply(KafkaApis.scala:164)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> at 
> kafka.server.KafkaApis.maybeUnblockDelayedFetchRequests(KafkaApis.scala:164)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:186)
> at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$2.apply(KafkaApis.scala:185)
> at scala.collection.immutable.Map$Map2.foreach(Map.scala:127)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:185)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:41)
> at java.lang.Thread.run(Thread.java:619)
> When I shut the consumer down, I don't see the exceptions anymore.
> This is the code that my consumer is running:
>   while(true) {
>     // we believe the consumer to be connected, so try and use it for a fetch request
>     val request = new FetchRequestBuilder()
>       .addFetch(topic, partition, nextOffset, fetchSize)
>       .maxWait(Int.MaxValue)
>       // TODO for super high-throughput, might be worth waiting for more bytes
>       .minBytes(1)
>       .build
>     debug("Fetching messages for stream %s and offset %s." format (streamPartition, nextOffset))
>     val messages = connectedConsumer.fetch(request)
>     debug("Fetch complete for stream %s and offset %s. Got messages: %s" format (streamPartition, nextOffset, messages))
>     if (messages.hasError) {
>       warn("Got error code from broker for %s: %s. Shutting down consumer to trigger a reconnect." format (streamPartition, messages.errorCode(topic, partition)))
>       ErrorMapping.maybeThrowException(messages.errorCode(topic, partition))
>     }
>     messages.messageSet(topic, partition).foreach(msg => {
>       watchers.foreach(_.onMessagesReady(msg.offset.toString, msg.message.payload))
>       nextOffset = msg.nextOffset
>     })
>   }
> Any idea 

[jira] [Created] (KAFKA-3657) NewProducer NullPointerException on ProduceRequest

2016-05-04 Thread Vamsi Subhash Achanta (JIRA)
Vamsi Subhash Achanta created KAFKA-3657:


 Summary: NewProducer NullPointerException on ProduceRequest
 Key: KAFKA-3657
 URL: https://issues.apache.org/jira/browse/KAFKA-3657
 Project: Kafka
  Issue Type: Bug
  Components: network, producer 
Affects Versions: 0.8.2.1
 Environment: linux 3.2.0 debian7
Reporter: Vamsi Subhash Achanta
Assignee: Jun Rao


On send().get(), the producer appends record batches to the accumulator, and 
Sender.java (a separate thread) flushes them to the server. The produce request 
waits on the countDownLatch in the FutureRecordMetadata:
public RecordMetadata get() throws InterruptedException, ExecutionException 
{
this.result.await();

In this case, the client thread is blocked forever (as it is get() without a 
timeout) waiting for the response, and the response, upon poll by the Sender, 
carries an attachment with a null batch value. The batch is processed and the 
request is errored out. The Sender catches the exception at a global level and 
moves on. As the accumulator is drained, the response will never be returned, 
and the producer client thread calling get() is blocked forever on the latch 
await call.

I checked at the server end but still haven't found the reason for the null 
batch. Any pointers on this?

ERROR [2016-05-01 21:00:09,256] [kafka-producer-network-thread |producer-app] 
[Sender] message_id: group_id: : Uncaught error in kafka producer I/O thread:
! java.lang.NullPointerException: null
! at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:266)
! at 
org.apache.kafka.clients.producer.internals.Sender.handleResponse(Sender.java:236)
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:196)
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
! at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3657) NewProducer NullPointerException on ProduceRequest

2016-05-04 Thread Vamsi Subhash Achanta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vamsi Subhash Achanta updated KAFKA-3657:
-
Description: 
On send().get(), the producer appends record batches to the accumulator, and 
Sender.java (a separate thread) flushes them to the server. The produce request 
waits on the countDownLatch in the FutureRecordMetadata:
public RecordMetadata get() throws InterruptedException, ExecutionException 
{
this.result.await();

In this case, the client thread is blocked forever (as it is get() without a 
timeout) waiting for the response, and the response, upon poll by the Sender, 
carries an attachment with a null batch value. The batch is processed and the 
request is errored out. The Sender catches the exception at a global level and 
moves on. As the accumulator is drained, the response will never be returned, 
and the producer client thread calling get() is blocked forever on the latch 
await call.

I checked at the server end but still haven't found the reason for the null 
batch. Any pointers on this?


ERROR [2016-05-01 21:00:09,256] [kafka-producer-network-thread |producer-app] 
[Sender] message_id: group_id: : Uncaught error in kafka producer I/O thread:
! java.lang.NullPointerException: null
! at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:266)
! at 
org.apache.kafka.clients.producer.internals.Sender.handleResponse(Sender.java:236)
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:196)
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
! at java.lang.Thread.run(Thread.java:745)

  was:
On send().get(), the producer appends record batches to the accumulator, and 
Sender.java (a separate thread) flushes them to the server. The produce request 
waits on the countDownLatch in the FutureRecordMetadata:
public RecordMetadata get() throws InterruptedException, ExecutionException 
{
this.result.await();

In this case, the client thread is blocked forever (as it is get() without a 
timeout) waiting for the response, and the response, upon poll by the Sender, 
carries an attachment with a null batch value. The batch is processed and the 
request is errored out. The Sender catches the exception at a global level and 
moves on. As the accumulator is drained, the response will never be returned, 
and the producer client thread calling get() is blocked forever on the latch 
await call.

I checked at the server end but still haven't found the reason for the null 
batch. Any pointers on this?

ERROR [2016-05-01 21:00:09,256] [kafka-producer-network-thread |producer-app] 
[Sender] message_id: group_id: : Uncaught error in kafka producer I/O thread:
! java.lang.NullPointerException: null
! at 
org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:266)
! at 
org.apache.kafka.clients.producer.internals.Sender.handleResponse(Sender.java:236)
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:196)
! at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
! at java.lang.Thread.run(Thread.java:745)


> NewProducer NullPointerException on ProduceRequest
> --
>
> Key: KAFKA-3657
> URL: https://issues.apache.org/jira/browse/KAFKA-3657
> Project: Kafka
>  Issue Type: Bug
>  Components: network, producer 
>Affects Versions: 0.8.2.1
> Environment: linux 3.2.0 debian7
>Reporter: Vamsi Subhash Achanta
>Assignee: Jun Rao
>
> On send().get(), the producer appends record batches to the accumulator, and 
> Sender.java (a separate thread) flushes them to the server. The produce 
> request waits on the countDownLatch in the FutureRecordMetadata:
> public RecordMetadata get() throws InterruptedException, 
> ExecutionException {
> this.result.await();
> In this case, the client thread is blocked forever (as it is get() without a 
> timeout) waiting for the response, and the response, upon poll by the Sender, 
> carries an attachment with a null batch value. The batch is processed and the 
> request is errored out. The Sender catches the exception at a global level 
> and moves on. As the accumulator is drained, the response will never be 
> returned, and the producer client thread calling get() is blocked forever on 
> the latch await call.
> I checked at the server end but still haven't found the reason for the null 
> batch. Any pointers on this?
> ERROR [2016-05-01 21:00:09,256] [kafka-producer-network-thread |producer-app] 
> [Sender] message_id: group_id: : Uncaught error in kafka producer I/O thread:
> ! java.lang.NullPointerException: null
> ! at 
> org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:266)
> ! at 
> 

Build failed in Jenkins: kafka-trunk-jdk8 #584

2016-05-04 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-2684; Add force option to topic / config command so they can be

--
[...truncated 1643 lines...]

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 

How to write Connect logs to file?

2016-05-04 Thread Anish Mashankar
How do I write Kafka Connect logs to a rolling file? It runs in a distributed
environment, and adding a log4j.properties file doesn't seem to work. What
other ways could I achieve this?
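For reference, a minimal log4j.properties sketch for a size-based rolling file (assuming the log4j 1.x RollingFileAppender shipped with the Kafka distribution; the path and sizes are placeholders). The worker can be pointed at it via KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/path/to/log4j.properties" before it starts:

{code}
# Route everything to a size-based rolling file.
log4j.rootLogger=INFO, file

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/kafka/connect.log
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%d] %p %m (%c)%n
{code}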

-- 
Anish Samir Mashankar


Fwd: How to build kafka jar with all dependencies?

2016-05-04 Thread ravi singh
Hello All,
I used ./gradlew jarAll, but the scala-dependent libs are still missing from
the jar. It must be something very simple that I'm missing. Please let me
know if anyone knows.
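If the goal is a single artifact with all dependencies, note that ./gradlew releaseTarGz builds a distribution tarball that includes the dependency jars; for a true fat jar, one option is the Gradle shadow plugin (a sketch under that assumption; the plugin is not part of Kafka's build):

{code}
// build.gradle sketch: bundle all runtime dependencies (scala libs included)
// into a single jar with the shadow plugin, then run: ./gradlew shadowJar
plugins {
    id 'java'
    id 'com.github.johnrengelman.shadow' version '1.2.3'
}
{code}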


-- 
*Regards,*
*Ravi*


[jira] [Updated] (KAFKA-2684) Add force option to TopicCommand & ConfigCommand to suppress console prompts

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2684:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 351
[https://github.com/apache/kafka/pull/351]

> Add force option to TopicCommand & ConfigCommand to suppress console prompts
> 
>
> Key: KAFKA-2684
> URL: https://issues.apache.org/jira/browse/KAFKA-2684
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Trivial
> Fix For: 0.10.1.0
>
>
> Add a force option to TopicCommand & ConfigCommand to suppress console prompts.
> This is useful for system tests and other callers that invoke these scripts 
> programmatically.
> Relates to KAFKA-2338
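Presumably invoked along these lines once merged (the exact flag spelling is an assumption based on the ticket title):

{code}
# Hypothetical invocation: suppress the interactive confirmation prompt.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
    --topic test-topic --partitions 4 --force
{code}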



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3652) Return error response for unsupported version of ApiVersionsRequest

2016-05-04 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3652:
--
Description: 
Discussion is in the PR https://github.com/apache/kafka/pull/1286. 
Don't fail authentication (for SASL) or break connections (normal operation) 
when an unsupported version of ApiVersionsRequest is received. Instead, return 
an error response so that the client can retry with an earlier version of the request.

  was:The current implementation allows multiple ApiVersionsRequests prior to 
SaslHandshakeRequest. Restrict to only one. Discussion is in the PR 
https://github.com/apache/kafka/pull/1286. 

Summary: Return error response for unsupported version of 
ApiVersionsRequest  (was: Allow only one ApiVersionsRequest before SASL 
handshake)

> Return error response for unsupported version of ApiVersionsRequest
> ---
>
> Key: KAFKA-3652
> URL: https://issues.apache.org/jira/browse/KAFKA-3652
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> Discussion is in the PR https://github.com/apache/kafka/pull/1286. 
> Don't fail authentication (for SASL) or break connections (normal operation) 
> when an unsupported version of ApiVersionsRequest is received. Instead, return 
> an error response so that the client can retry with an earlier version of the request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3647) Unable to set a ssl provider / only DSS ciphers available

2016-05-04 Thread Elvar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elvar updated KAFKA-3647:
-
Summary: Unable to set a ssl provider / only DSS ciphers available  (was: 
Unable to set a ssl provider)

> Unable to set a ssl provider / only DSS ciphers available
> -
>
> Key: KAFKA-3647
> URL: https://issues.apache.org/jira/browse/KAFKA-3647
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.1
> Environment: Centos, OracleJRE 8, Vagrant
>Reporter: Elvar
>
> When defining an ssl provider, Kafka does not start because the provider is 
> not found.
> {code}
> [2016-05-02 13:48:48,252] FATAL [Kafka Server 11], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.KafkaException: 
> java.security.NoSuchProviderException: no such provider: sun.security.ec.SunEC
> at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:44)
> {code}
> To test
> {code}
> /bin/kafka-server-start /etc/kafka/server.properties --override 
> ssl.provider=sun.security.ec.SunEC
> {code}
> This is stopping us from talking to Kafka with SSL from Go programs because 
> no common cipher suites are available.
> Using sslscan this is available from Kafka
> {code}
>  Supported Server Cipher(s):
>Accepted  TLSv1  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLSv1  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLSv1  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS11  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS11  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS11  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA256
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS12  128 bits  DHE-DSS-AES128-GCM-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS12  128 bits  EDH-DSS-DES-CBC3-SHA
>  Preferred Server Cipher(s):
>SSLv2  0 bits(NONE)
>TLSv1  256 bits  DHE-DSS-AES256-SHA
>TLS11  256 bits  DHE-DSS-AES256-SHA
>TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
> {code}
> From the Golang documentation, these are available there
> {code}
> TLS_RSA_WITH_RC4_128_SHAuint16 = 0x0005
> TLS_RSA_WITH_3DES_EDE_CBC_SHA   uint16 = 0x000a
> TLS_RSA_WITH_AES_128_CBC_SHAuint16 = 0x002f
> TLS_RSA_WITH_AES_256_CBC_SHAuint16 = 0x0035
> TLS_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009c
> TLS_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009d
> TLS_ECDHE_ECDSA_WITH_RC4_128_SHAuint16 = 0xc007
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHAuint16 = 0xc009
> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHAuint16 = 0xc00a
> TLS_ECDHE_RSA_WITH_RC4_128_SHA  uint16 = 0xc011
> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xc012
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA  uint16 = 0xc013
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA  uint16 = 0xc014
> TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   uint16 = 0xc02f
> TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xc02b
> TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   uint16 = 0xc030
> TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xc02c
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3647) Unable to set a ssl provider

2016-05-04 Thread Elvar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270468#comment-15270468
 ] 

Elvar commented on KAFKA-3647:
--

Regarding the ssl provider in the Kafka config, has anyone gotten that to work?

> Unable to set a ssl provider
> 
>
> Key: KAFKA-3647
> URL: https://issues.apache.org/jira/browse/KAFKA-3647
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.1
> Environment: Centos, OracleJRE 8, Vagrant
>Reporter: Elvar
>
> When defining an ssl provider, Kafka does not start because the provider is 
> not found.
> {code}
> [2016-05-02 13:48:48,252] FATAL [Kafka Server 11], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.KafkaException: 
> java.security.NoSuchProviderException: no such provider: sun.security.ec.SunEC
> at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:44)
> {code}
> To test
> {code}
> /bin/kafka-server-start /etc/kafka/server.properties --override 
> ssl.provider=sun.security.ec.SunEC
> {code}
> This is stopping us from talking to Kafka with SSL from Go programs because 
> no common cipher suites are available.
> Using sslscan this is available from Kafka
> {code}
>  Supported Server Cipher(s):
>Accepted  TLSv1  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLSv1  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLSv1  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS11  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS11  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS11  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA256
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS12  128 bits  DHE-DSS-AES128-GCM-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS12  128 bits  EDH-DSS-DES-CBC3-SHA
>  Preferred Server Cipher(s):
>SSLv2  0 bits(NONE)
>TLSv1  256 bits  DHE-DSS-AES256-SHA
>TLS11  256 bits  DHE-DSS-AES256-SHA
>TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
> {code}
> From the Golang documentation, these are available there
> {code}
> TLS_RSA_WITH_RC4_128_SHAuint16 = 0x0005
> TLS_RSA_WITH_3DES_EDE_CBC_SHA   uint16 = 0x000a
> TLS_RSA_WITH_AES_128_CBC_SHAuint16 = 0x002f
> TLS_RSA_WITH_AES_256_CBC_SHAuint16 = 0x0035
> TLS_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009c
> TLS_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009d
> TLS_ECDHE_ECDSA_WITH_RC4_128_SHAuint16 = 0xc007
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHAuint16 = 0xc009
> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHAuint16 = 0xc00a
> TLS_ECDHE_RSA_WITH_RC4_128_SHA  uint16 = 0xc011
> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xc012
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA  uint16 = 0xc013
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA  uint16 = 0xc014
> TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   uint16 = 0xc02f
> TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xc02b
> TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   uint16 = 0xc030
> TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xc02c
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3647) Unable to set a ssl provider

2016-05-04 Thread Elvar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270467#comment-15270467
 ] 

Elvar commented on KAFKA-3647:
--

Tried recreating the JKS files, and this is how I did it:

{code}
Generate the CA cert and key:
openssl req -new -x509 -keyout ca.key -out ca.cert -days 3650 -subj 
"/C=IS/ST=Reykjavik/L=Reykjavik/O=M/OU=Mon/CN=kafka.local" -nodes


Import CA cert to server truststore:
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file 
ca.cert -storepass pass -noprompt
Import CA cert to client truststore:
keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file 
ca.cert -storepass pass -noprompt


Create server keystore and key:
keytool -keystore kafka.server.keystore.jks -alias confluent-1 -validity 3650 
-genkey -storepass pass -keypass pass -dname "CN=confluent-1, OU=Mon, O=M, 
L=Reykjavik, S=Reykjavik, C=IS"
Create server CSR:
keytool -keystore kafka.server.keystore.jks -alias confluent-1 -certreq -file 
server.csr -storepass pass
Sign server CSR with CA key:
openssl x509 -req -CA ca.cert -CAkey ca.key -in server.csr -out server.signed 
-days 3650 -CAcreateserial -passin pass:pass
Import CA to the server keystore:
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca.cert 
-storepass pass -noprompt
Import signed server cert to server keystore:
keytool -keystore kafka.server.keystore.jks -alias confluent-1 -import -file 
server.signed -storepass pass -noprompt





Create client keystore and key:
keytool -keystore kafka.client.keystore.jks -alias workclient -validity 3650 
-genkey -storepass pass -keypass pass -dname "CN=workclient, OU=Mon, O=M, 
L=Reykjavik, S=Reykjavik, C=IS"
Create client CSR:
keytool -keystore kafka.client.keystore.jks -alias workclient -certreq -file 
client.csr -storepass pass
Sign client CSR with CA key:
openssl x509 -req -CA ca.cert -CAkey ca.key -in client.csr -out client.signed 
-days 3650 -CAcreateserial -passin pass:pass
Import CA cert to the client keystore:
keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca.cert 
-storepass pass -noprompt
Import signed client cert to client keystore:
keytool -keystore kafka.client.keystore.jks -alias workclient -import -file 
client.signed -storepass pass -noprompt
{code}

sslscan still reports only DSS ciphers.
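One likely culprit: keytool's -genkey defaults to DSA keys when -keyalg is not specified, and a DSA server key restricts the broker to DHE-DSS cipher suites. Regenerating the server key with -keyalg RSA should expose the RSA-based suites that Go clients offer, e.g.:

{code}
keytool -keystore kafka.server.keystore.jks -alias confluent-1 -validity 3650 \
  -genkey -keyalg RSA -storepass pass -keypass pass \
  -dname "CN=confluent-1, OU=Mon, O=M, L=Reykjavik, S=Reykjavik, C=IS"
{code}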

Using groovy and a simple command I am able to extract the available 
ciphers in detail

{code}
groovy:000> 
java.security.Security.providers.each{p->p.getServices().each{s->println s}}
{code}

Here is my output

{code:collapse=true}
SUN: SecureRandom.NativePRNG -> sun.security.provider.NativePRNG

SUN: SecureRandom.SHA1PRNG -> sun.security.provider.SecureRandom
  attributes: {ImplementedIn=Software}

SUN: Signature.SHA1withDSA -> sun.security.provider.DSA$SHA1withDSA
  aliases: [DSA, DSS, SHA/DSA, SHA-1/DSA, SHA1/DSA, SHAwithDSA, DSAWithSHA1, 
OID.1.2.840.10040.4.3, 1.2.840.10040.4.3, 1.3.14.3.2.13, 1.3.14.3.2.27]
  attributes: {ImplementedIn=Software, KeySize=1024, 
SupportedKeyClasses=java.security.interfaces.DSAPublicKey|java.security.interfaces.DSAPrivateKey}

SUN: Signature.NONEwithDSA -> sun.security.provider.DSA$RawDSA
  aliases: [RawDSA]
  attributes: {KeySize=1024, 
SupportedKeyClasses=java.security.interfaces.DSAPublicKey|java.security.interfaces.DSAPrivateKey}

SUN: Signature.SHA224withDSA -> sun.security.provider.DSA$SHA224withDSA
  aliases: [OID.2.16.840.1.101.3.4.3.1, 2.16.840.1.101.3.4.3.1]
  attributes: {KeySize=2048, 
SupportedKeyClasses=java.security.interfaces.DSAPublicKey|java.security.interfaces.DSAPrivateKey}

SUN: Signature.SHA256withDSA -> sun.security.provider.DSA$SHA256withDSA
  aliases: [OID.2.16.840.1.101.3.4.3.2, 2.16.840.1.101.3.4.3.2]
  attributes: {KeySize=2048, 
SupportedKeyClasses=java.security.interfaces.DSAPublicKey|java.security.interfaces.DSAPrivateKey}

SUN: KeyPairGenerator.DSA -> sun.security.provider.DSAKeyPairGenerator
  aliases: [OID.1.2.840.10040.4.1, 1.2.840.10040.4.1, 1.3.14.3.2.12]
  attributes: {ImplementedIn=Software, KeySize=2048}

SUN: MessageDigest.MD2 -> sun.security.provider.MD2

SUN: MessageDigest.MD5 -> sun.security.provider.MD5
  attributes: {ImplementedIn=Software}

SUN: MessageDigest.SHA -> sun.security.provider.SHA
  aliases: [SHA-1, SHA1, 1.3.14.3.2.26, OID.1.3.14.3.2.26]
  attributes: {ImplementedIn=Software}

SUN: MessageDigest.SHA-224 -> sun.security.provider.SHA2$SHA224
  aliases: [2.16.840.1.101.3.4.2.4, OID.2.16.840.1.101.3.4.2.4]

SUN: MessageDigest.SHA-256 -> sun.security.provider.SHA2$SHA256
  aliases: [2.16.840.1.101.3.4.2.1, OID.2.16.840.1.101.3.4.2.1]

SUN: MessageDigest.SHA-384 -> sun.security.provider.SHA5$SHA384
  aliases: [2.16.840.1.101.3.4.2.2, OID.2.16.840.1.101.3.4.2.2]

SUN: MessageDigest.SHA-512 -> sun.security.provider.SHA5$SHA512
  aliases: [2.16.840.1.101.3.4.2.3, OID.2.16.840.1.101.3.4.2.3]

SUN: AlgorithmParameterGenerator.DSA -> 

[jira] [Updated] (KAFKA-3448) Support zone index in IPv6 regex

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3448:
---
Assignee: Soumyajit Sahu

> Support zone index in IPv6 regex
> 
>
> Key: KAFKA-3448
> URL: https://issues.apache.org/jira/browse/KAFKA-3448
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: Windows,Linux
>Reporter: Soumyajit Sahu
>Assignee: Soumyajit Sahu
> Fix For: 0.10.0.0
>
>
> When an address is written textually, the zone index is appended to the 
> address, separated by a percent sign (%). The actual syntax of zone indices 
> depends on the operating system.
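For illustration, a zone-indexed literal can be split before validating the address part (a sketch; not the regex change itself):

{code}
public class ZoneIndexSplit {
    public static void main(String[] args) {
        // Hypothetical illustration: separate the zone index before validating
        // the address part against the IPv6 regex.
        String literal = "fe80::1%eth0";
        int sep = literal.indexOf('%');
        String address = sep >= 0 ? literal.substring(0, sep) : literal; // "fe80::1"
        String zone = sep >= 0 ? literal.substring(sep + 1) : null;      // "eth0"
        System.out.println(address + " / " + zone);
    }
}
{code}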



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3647) Unable to set a ssl provider

2016-05-04 Thread Elvar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270401#comment-15270401
 ] 

Elvar commented on KAFKA-3647:
--

Yes, tried with OpenJDK 1.7 and 1.8 and the Oracle JRE with and without JCE. 

What is strange is that with the Oracle JRE and JCE, I can select whatever 
ciphers I want in the Kafka config and it will not give an error, but when I 
run sslscan against the Kafka SSL port, no ciphers at all are found. That 
hints that either the ciphers are hardcoded somewhere, which I find doubtful, 
or that something is wrong with how the java keystore is created in my case, 
resulting in only DSS ciphers being available for use. Will look into that 
and report back.

> Unable to set a ssl provider
> 
>
> Key: KAFKA-3647
> URL: https://issues.apache.org/jira/browse/KAFKA-3647
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.1
> Environment: Centos, OracleJRE 8, Vagrant
>Reporter: Elvar
>
> When defining an ssl provider, Kafka does not start because the provider is 
> not found.
> {code}
> [2016-05-02 13:48:48,252] FATAL [Kafka Server 11], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.KafkaException: 
> java.security.NoSuchProviderException: no such provider: sun.security.ec.SunEC
> at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:44)
> {code}
> To test
> {code}
> /bin/kafka-server-start /etc/kafka/server.properties --override 
> ssl.provider=sun.security.ec.SunEC
> {code}
> This is stopping us from talking to Kafka with SSL from Go programs because 
> no common cipher suites are available.
> Using sslscan this is available from Kafka
> {code}
>  Supported Server Cipher(s):
>Accepted  TLSv1  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLSv1  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLSv1  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS11  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS11  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS11  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA256
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS12  128 bits  DHE-DSS-AES128-GCM-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS12  128 bits  EDH-DSS-DES-CBC3-SHA
>  Preferred Server Cipher(s):
>SSLv2  0 bits(NONE)
>TLSv1  256 bits  DHE-DSS-AES256-SHA
>TLS11  256 bits  DHE-DSS-AES256-SHA
>TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
> {code}
> From the Golang documentation, these are available there
> {code}
> TLS_RSA_WITH_RC4_128_SHAuint16 = 0x0005
> TLS_RSA_WITH_3DES_EDE_CBC_SHA   uint16 = 0x000a
> TLS_RSA_WITH_AES_128_CBC_SHAuint16 = 0x002f
> TLS_RSA_WITH_AES_256_CBC_SHAuint16 = 0x0035
> TLS_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009c
> TLS_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009d
> TLS_ECDHE_ECDSA_WITH_RC4_128_SHAuint16 = 0xc007
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHAuint16 = 0xc009
> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHAuint16 = 0xc00a
> TLS_ECDHE_RSA_WITH_RC4_128_SHA  uint16 = 0xc011
> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xc012
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA  uint16 = 0xc013
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA  uint16 = 0xc014
> TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   uint16 = 0xc02f
> TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xc02b
> TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   uint16 = 0xc030
> TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xc02c
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3147) Memory records is not writable in MirrorMaker

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3147.

Resolution: Fixed

Thanks [~omkreddy], closing again.

> Memory records is not writable in MirrorMaker
> -
>
> Key: KAFKA-3147
> URL: https://issues.apache.org/jira/browse/KAFKA-3147
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Meghana Narasimhan
>Assignee: Mayuresh Gharat
> Fix For: 0.10.0.0
>
>
> Hi,
> We are running a 3 node cluster (kafka version 0.9) and Node 0 also has a few 
> mirror makers running. 
> When we do a rolling restart of the cluster, the mirror maker shuts down with 
> the following errors.
> [2016-01-11 20:16:00,348] WARN Got error produce response with correlation id 
> 12491674 on topic-partition test-99, retrying (2147483646 attempts left). 
> Error: NOT_LEADER_FOR_PARTITION 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:00,853] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> java.lang.IllegalStateException: Memory records is not writable
> at 
> org.apache.kafka.common.record.MemoryRecords.append(MemoryRecords.java:93)
> at 
> org.apache.kafka.clients.producer.internals.RecordBatch.tryAppend(RecordBatch.java:69)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:168)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:435)
> at 
> kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:593)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:398)
> [2016-01-11 20:16:01,072] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-75, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-93, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-24, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:20,479] FATAL [mirrormaker-thread-0] Mirror maker thread 
> exited abnormally, stopping the whole mirror maker. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> Curious if the NOT_LEADER_FOR_PARTITION is because of a potential bug hinted 
> at in this thread: 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201505.mbox/%3ccajs3ho_u8s1xou_kudnfjamypjtmrjlw10qvkngn2yqkdan...@mail.gmail.com%3E
>
> And I think the mirror maker shuts down because of "abort.on.send.failure", 
> which is set to true in our case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3147) Memory records is not writable in MirrorMaker

2016-05-04 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270385#comment-15270385
 ] 

Manikumar Reddy commented on KAFKA-3147:


A similar exception scenario was fixed in KAFKA-3594. The above logs indicate 
that this is due to the issue addressed there.

> Memory records is not writable in MirrorMaker
> -
>
> Key: KAFKA-3147
> URL: https://issues.apache.org/jira/browse/KAFKA-3147
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Meghana Narasimhan
>Assignee: Mayuresh Gharat
> Fix For: 0.10.0.0
>
>
> Hi,
> We are running a 3 node cluster (kafka version 0.9) and Node 0 also has a few 
> mirror makers running. 
> When we do a rolling restart of the cluster, the mirror maker shuts down with 
> the following errors.
> [2016-01-11 20:16:00,348] WARN Got error produce response with correlation id 
> 12491674 on topic-partition test-99, retrying (2147483646 attempts left). 
> Error: NOT_LEADER_FOR_PARTITION 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:00,853] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> java.lang.IllegalStateException: Memory records is not writable
> at 
> org.apache.kafka.common.record.MemoryRecords.append(MemoryRecords.java:93)
> at 
> org.apache.kafka.clients.producer.internals.RecordBatch.tryAppend(RecordBatch.java:69)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:168)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:435)
> at 
> kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:593)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:398)
> [2016-01-11 20:16:01,072] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-75, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-93, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-24, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:20,479] FATAL [mirrormaker-thread-0] Mirror maker thread 
> exited abnormally, stopping the whole mirror maker. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> Curious if the NOT_LEADER_FOR_PARTITION is because of a potential bug hinted 
> at in this thread: 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201505.mbox/%3ccajs3ho_u8s1xou_kudnfjamypjtmrjlw10qvkngn2yqkdan...@mail.gmail.com%3E
>
> And I think the mirror maker shuts down because of "abort.on.send.failure", 
> which is set to true in our case. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2236) offset request reply racing with segment rolling

2016-05-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270380#comment-15270380
 ] 

ASF GitHub Bot commented on KAFKA-2236:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1318

KAFKA-2236; Offset request reply racing with segment rolling



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
KAFKA-2236-offset-request-reply-segment-rolling-race

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1318.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1318


commit 7d6d50cd377de82215856e22fc09e137a724acfc
Author: William Thurston 
Date:   2015-07-19T02:54:05Z

Addresses Jira Kafka-2236 to eliminate an array index out of bounds 
exception when segments are appended between checks




> offset request reply racing with segment rolling
> 
>
> Key: KAFKA-2236
> URL: https://issues.apache.org/jira/browse/KAFKA-2236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: Linux x86_64, java.1.7.0_72, discovered using librdkafka 
> based client.
>Reporter: Alfred Landrum
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: newbie
>
> My use case with kafka involves an aggressive retention policy that rolls 
> segment files frequently. My librdkafka based client sees occasional errors 
> to offset requests, showing up in the broker log like:
> [2015-06-02 02:33:38,047] INFO Rolled new log segment for 
> 'receiver-93b40462-3850-47c1-bcda-8a3e221328ca-50' in 1 ms. (kafka.log.Log)
> [2015-06-02 02:33:38,049] WARN [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis)
> java.lang.ArrayIndexOutOfBoundsException: 3
> at kafka.server.KafkaApis.fetchOffsetsBefore(KafkaApis.scala:469)
> at kafka.server.KafkaApis.fetchOffsets(KafkaApis.scala:449)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:411)
> at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:402)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.server.KafkaApis.handleOffsetRequest(KafkaApis.scala:402)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:61)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
> at java.lang.Thread.run(Thread.java:745)
> quoting Guozhang Wang's reply to my query on the users list:
> "I check the 0.8.2 code and may probably find a bug related to your issue.
> Basically, segsArray.last.size is called multiple times during handling
> offset requests, while segsArray.last could get concurrent appends. Hence
> it is possible that in line 461, if(segsArray.last.size > 0) returns false
> while later in line 468, if(segsArray.last.size > 0) could return true."
> http://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAHwHRrUK-3wdoEAaFbsD0E859Ea0gXixfxgDzF8E3%3D_8r7K%2Bpw%40mail.gmail.com%3E
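The classic fix for this read-twice race is to snapshot the racy value once per request; a sketch with hypothetical names, not the actual KafkaApis code:

{code}
// Hypothetical sketch: read the last segment and its size exactly once,
// then reason about the snapshot instead of re-reading a moving target.
LogSegment last = segments.lastSegment();  // hypothetical accessor
long lastSize = last.size();               // single read; concurrent appends
                                           // only grow the real segment afterwards
if (lastSize > 0) {
    // ... use lastSize consistently for the rest of this request; never
    // call last.size() again within the same handling pass ...
}
{code}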



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2236; Offset request reply racing with s...

2016-05-04 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1318

KAFKA-2236; Offset request reply racing with segment rolling



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
KAFKA-2236-offset-request-reply-segment-rolling-race

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1318.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1318


commit 7d6d50cd377de82215856e22fc09e137a724acfc
Author: William Thurston 
Date:   2015-07-19T02:54:05Z

Addresses JIRA KAFKA-2236 to eliminate an array index out of bounds 
exception when segments are appended between checks




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Reopened] (KAFKA-3147) Memory records is not writable in MirrorMaker

2016-05-04 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reopened KAFKA-3147:


It is not clear whether this has been completely fixed, given the following 
comment on the PR:

{quote}
We patched the 0.9.0.0 branch with this pull request and tested the patched 
version in our load-test environment; after ~7 hours we got the same error:

java.lang.IllegalStateException: Memory records is not writable
at org.apache.kafka.common.record.MemoryRecords.append(MemoryRecords.java:93)
at 
org.apache.kafka.clients.producer.internals.RecordBatch.tryAppend(RecordBatch.java:69)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:168)

This always happens after the following exceptions:
[kafka-producer-network-thread | audiLoadTest] WARN 
org.apache.kafka.clients.producer.internals.Sender - Got error produce response 
with correlation id 2274005 on topic-partition mct-3, retrying (1 attempts 
left). Error: NETWORK_EXCEPTION
{quote}

[~mgharat], thoughts?
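
For context on the exception itself: a producer record batch is writable only
until it is closed for sending, after which any further append must fail fast.
Below is a minimal sketch of that invariant, using a hypothetical
SimpleRecordBuffer rather than Kafka's MemoryRecords internals:

{code}
import java.nio.ByteBuffer;

// Hedged sketch of the writable-flag invariant behind the stack trace in the
// quote above; this is an illustration, not Kafka's MemoryRecords.
public class SimpleRecordBuffer {
    private final ByteBuffer buffer = ByteBuffer.allocate(1024);
    private boolean writable = true;

    public synchronized void append(byte[] record) {
        if (!writable) // the guard whose message appears in the report above
            throw new IllegalStateException("Memory records is not writable");
        buffer.put(record);
    }

    // Called when the batch is handed off for sending; any thread that still
    // tries to append to the same batch afterwards hits the guard in append().
    public synchronized void close() {
        writable = false;
    }
}
{code}

Read this way, the reopened failure suggests some path on which a batch is
closed (for example around a NETWORK_EXCEPTION retry) while another path still
tries to append to it.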

> Memory records is not writable in MirrorMaker
> ---------------------------------------------
>
> Key: KAFKA-3147
> URL: https://issues.apache.org/jira/browse/KAFKA-3147
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Meghana Narasimhan
>Assignee: Mayuresh Gharat
> Fix For: 0.10.0.0
>
>
> Hi,
> We are running a 3-node cluster (Kafka version 0.9), and node 0 also has a 
> few mirror makers running.
> When we do a rolling restart of the cluster, the mirror maker shuts down with 
> the following errors.
> [2016-01-11 20:16:00,348] WARN Got error produce response with correlation id 
> 12491674 on topic-partition test-99, retrying (2147483646 attempts left). 
> Error: NOT_LEADER_FOR_PARTITION 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:00,853] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> java.lang.IllegalStateException: Memory records is not writable
> at 
> org.apache.kafka.common.record.MemoryRecords.append(MemoryRecords.java:93)
> at 
> org.apache.kafka.clients.producer.internals.RecordBatch.tryAppend(RecordBatch.java:69)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:168)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:435)
> at 
> kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:593)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:398)
> [2016-01-11 20:16:01,072] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-75, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-93, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-24, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:20,479] FATAL [mirrormaker-thread-0] Mirror maker thread 
> exited abnormally, stopping the whole mirror maker. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> Curious whether the NOT_LEADER_FOR_PARTITION is caused by the potential bug 
> hinted at in this thread: 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201505.mbox/%3ccajs3ho_u8s1xou_kudnfjamypjtmrjlw10qvkngn2yqkdan...@mail.gmail.com%3E
>
> And I think the mirror maker shuts down because of "abort.on.send.failure", 
> which is set to true in our case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3575) When the console consumer accesses a topic that does not exist, "Control + C" cannot exit the process

2016-05-04 Thread Taiyuan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taiyuan Zhang reassigned KAFKA-3575:


Assignee: Taiyuan Zhang

> When the console consumer accesses a topic that does not exist, "Control + 
> C" cannot exit the process
> ------------------------------------------------------------------------
>
> Key: KAFKA-3575
> URL: https://issues.apache.org/jira/browse/KAFKA-3575
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: SUSE Linux Enterprise Server 11 SP3
>Reporter: NieWang
>Assignee: Taiyuan Zhang
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> 1. Use "sh kafka-console-consumer.sh --zookeeper 10.252.23.133:2181 --topic 
> topic_02" to start the console consumer. topic_02 does not exist.
> 2. You cannot use "Control + C" to exit the console consumer process; the 
> process is blocked.
> 3. Use jstack to check the process stack, as follows:
> linux:~ # jstack 122967
> 2016-04-18 15:46:06
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.66-b17 mixed mode):
> "Attach Listener" #29 daemon prio=9 os_prio=0 tid=0x01781800 
> nid=0x1e0c8 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-4" #27 prio=5 os_prio=0 tid=0x018a4000 nid=0x1e08a waiting on 
> condition [0x7ffbe5ac]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe00ed3b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at kafka.tools.ConsoleConsumer$$anon$1.run(ConsoleConsumer.scala:101)
> "SIGINT handler" #28 daemon prio=9 os_prio=0 tid=0x019d5800 
> nid=0x1e089 in Object.wait() [0x7ffbe5bc1000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.$$YJP$$wait(Native Method)
> at java.lang.Object.wait(Object.java)
> at java.lang.Thread.join(Thread.java:1245)
> - locked <0xe71fd4e8> (a kafka.tools.ConsoleConsumer$$anon$1)
> at java.lang.Thread.join(Thread.java:1319)
> at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
> at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
> at java.lang.Shutdown.runHooks(Shutdown.java:123)
> at java.lang.Shutdown.sequence(Shutdown.java:167)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - locked <0xe00abfd8> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-2" #20 daemon prio=5 os_prio=0 
> tid=0x7ffbec77a800 nid=0x1e079 waiting on condition [0x7ffbe66c8000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-1" #19 daemon prio=5 os_prio=0 
> tid=0x7ffbec783000 nid=0x1e078 waiting on condition [0x7ffbe67c9000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>