Build failed in Jenkins: kafka-trunk-jdk8 #1056

2016-11-22 Thread Apache Jenkins Server
See 

Changes:

[becket.qin] KAFKA-4362; Consumer can fail after reassignment of the offsets 
topic

--
[...truncated 14428 lines...]

org.apache.kafka.streams.KafkaStreamsTest > 
shouldReturnFalseOnCloseWhenThreadsHaventTerminated PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetAllTasksWithStoreWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetAllTasksWithStoreWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
java.lang.AssertionError: Condition not met within timeout 3. waiting 
for store count-by-key
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:279)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:488)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 

[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-22 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15689102#comment-15689102
 ] 

huxi commented on KAFKA-4430:
-

A message larger than batch.size but smaller than max.request.size is 
acceptable as long as it is also smaller than message.max.bytes (broker config) 
or its topic-level override max.message.bytes. With your current configuration, 
you will not see any errors thrown once you set the topic-level 
max.message.bytes to 2 MB.
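
To make the interplay concrete, here is a minimal producer-side sketch, 
assuming the standard Java client (the broker address and topic name are 
placeholders); the broker-side caps, message.max.bytes and the topic's 
max.message.bytes override, are configured separately on the server:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class SizeLimitsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());
        props.put("batch.size", 500 * 1024);        // 500 KB: only caps batching
        props.put("max.request.size", 1024 * 1024); // 1 MB: hard client-side limit
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // 800 KB record: larger than batch.size, so it is sent in its own
            // batch; the client accepts it because it is under max.request.size.
            // The broker still rejects it if it exceeds message.max.bytes or the
            // topic's max.message.bytes.
            byte[] value = new byte[800 * 1024];
            producer.send(new ProducerRecord<>("topic1", value));
        }
    }
}
{code}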

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>  Labels: newbie
>
> I have a setup as below:
> DC Kafka
> MirrorMaker
> Aggregate Kafka
> Here are the settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka sides. MirrorMaker producer settings: batch.size is set 
> to 500 KB, max.request.size is set to 1 MB, acks to 0, and 
> compression to gzip.
> However, on the Aggregate Kafka I get the following exception:
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to send 
> messages smaller than 1 MB. Are the messages getting dropped? Under what 
> circumstances does this error occur?





Build failed in Jenkins: kafka-trunk-jdk7 #1706

2016-11-22 Thread Apache Jenkins Server
See 

Changes:

[becket.qin] KAFKA-4362; Consumer can fail after reassignment of the offsets 
topic

--
[...truncated 14401 lines...]
org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldGetInstanceWithKeyWithMergedStreams PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldNotThrowNPEWhenOnChangeNotCalled STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldNotThrowNPEWhenOnChangeNotCalled PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStoreNameIsNull STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStoreNameIsNull PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldReturnNotAvailableWhenClusterIsEmpty STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldReturnNotAvailableWhenClusterIsEmpty PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldReturnEmptyCollectionOnGetAllInstancesWithStoreWhenStoreDoesntExist 
STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldReturnEmptyCollectionOnGetAllInstancesWithStoreWhenStoreDoesntExist PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldReturnNullOnGetWithKeyWhenStoreDoesntExist STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldReturnNullOnGetWithKeyWhenStoreDoesntExist PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStreamPartitionerIsNull STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStreamPartitionerIsNull PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldGetInstanceWithKeyAndCustomPartitioner STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldGetInstanceWithKeyAndCustomPartitioner PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStoreNameIsNullOnGetAllInstancesWithStore STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStoreNameIsNullOnGetAllInstancesWithStore PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 

[jira] [Commented] (KAFKA-1120) Controller could miss a broker state change

2016-11-22 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15689028#comment-15689028
 ] 

James Cheng commented on KAFKA-1120:


No, because broker 1 was up and running by 22:58:01,688. Broker 1 just sat 
there and did nothing, because the controller didn't send it any messages to 
follow anyone. It was not leader for any partitions, and did not attempt to 
follow anyone.

Here is the relevant timeline on broker 1:
{noformat}
[2016-11-22 22:55:18,814] INFO [Kafka Server 1], Starting controlled shutdown 
(kafka.server.KafkaServer)
[2016-11-22 22:57:11,437] INFO [Kafka Server 1], shut down completed 
(kafka.server.KafkaServer)
[2016-11-22 22:57:40,168] INFO starting (kafka.server.KafkaServer)
[2016-11-22 22:57:43,130] INFO Creating /brokers/ids/1 (is it secure? false) 
(kafka.utils.ZKCheckedEphemeral)
[2016-11-22 22:57:43,134] INFO Registered broker 1 at path /brokers/ids/1 with 
addresses: PLAINTEXT -> EndPoint(core01.tec2.tivo.com,9092,PLAINTEXT) 
(kafka.utils.ZkUtils)
{noformat}


> Controller could miss a broker state change 
> 
>
> Key: KAFKA-1120
> URL: https://issues.apache.org/jira/browse/KAFKA-1120
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jun Rao
>
> When the controller is in the middle of processing a task (e.g., preferred 
> leader election, broker change), it holds a controller lock. During this 
> time, a broker could have de-registered and re-registered itself in ZK. After 
> the controller finishes processing the current task, it will start processing 
> the logic in the broker change listener. However, it will see no broker 
> change and therefore won't do anything to the restarted broker. This broker 
> will be in a weird state since the controller doesn't inform it to become the 
> leader of any partition. Yet, the cached metadata in other brokers could 
> still list that broker as the leader for some partitions. Client requests 
> routed to that broker will then get a TopicOrPartitionNotExistException. This 
> broker will continue to be in this bad state until it's restarted again.





Re: Kafka consumers are not equally distributed

2016-11-22 Thread Sharninder
Could it be because of the partition key ?

On Wed, Nov 23, 2016 at 12:33 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:

> Hi there,
>
> We are doing a load test in Kafka at 25 tps. For the first 9 hours it went
> fine; almost 80K messages/hr were processed. After that we saw a lot of lag
> and we stopped the incoming load.
>
> Currently we see 15K messages/hr being processed. We have 40 consumer
> instances with concurrency 4 and 2 topics, each with 160 partitions,
> so there is one consumer thread per partition.
>
> We found that some of the partitions are sitting idle while some are
> overloaded, and it's really slowing down the consumer message processing.
>
> Why is rebalancing not happening, and why are the existing messages not
> distributed equally among the instances? We tried to restart the app, but
> it stays at the same pace. Any idea what the reason could be?
>
> Thanks
> Achintya
>
>


-- 
--
Sharninder
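
If the records are keyed (as Sharninder suggests), the default partitioner 
pins every key to a fixed partition, so a skewed key distribution produces 
exactly this picture: some partitions idle while others are overloaded. A 
quick diagnostic sketch using the Java client's own hash helpers (the key 
names below are hypothetical):

{code}
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class KeySkewCheck {
    public static void main(String[] args) {
        int numPartitions = 160;
        // Hypothetical sample of production keys; feed in real ones to measure skew.
        String[] sampleKeys = {"order-1", "order-2", "customer-42"};
        for (String key : sampleKeys) {
            byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
            // Same computation the default partitioner applies to keyed records.
            int partition = Utils.toPositive(Utils.murmur2(bytes)) % numPartitions;
            System.out.println(key + " -> partition " + partition);
        }
    }
}
{code}

If most keys hash to a few partitions, rebalancing cannot help: assignment 
moves partitions between consumers, it never moves the records within them.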


[jira] [Commented] (KAFKA-4362) Consumer can fail after reassignment of the offsets topic partition

2016-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1566#comment-1566
 ] 

ASF GitHub Bot commented on KAFKA-4362:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2116


> Consumer can fail after reassignment of the offsets topic partition
> ---
>
> Key: KAFKA-4362
> URL: https://issues.apache.org/jira/browse/KAFKA-4362
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Joel Koshy
>Assignee: Mayuresh Gharat
> Fix For: 0.10.1.1
>
>
> When a consumer offsets topic partition reassignment completes, an offset 
> commit shows this:
> {code}
> java.lang.IllegalArgumentException: Message format version for partition 100 
> not found
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at scala.Option.getOrElse(Option.scala:120) ~[scala-library-2.10.4.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager.kafka$coordinator$GroupMetadataManager$$getMessageFormatVersionAndTimestamp(GroupMetadataManager.scala:632)
>  ~[kafka_2.10.jar:?]
> at 
> ...
> {code}
> The issue is that the replica has been deleted, so 
> {{GroupMetadataManager.getMessageFormatVersionAndTimestamp}} throws this 
> exception instead, which propagates as an unknown error.
> Unfortunately, consumers don't react to this and will keep failing their 
> offset commits.
> One workaround in the above situation is to bounce the cluster - the consumer 
> will then be forced to rediscover the group coordinator.
> (Incidentally, the message incorrectly prints the number of partitions 
> instead of the actual partition.)





[GitHub] kafka pull request #2116: KAFKA-4362 : Consumer can fail after reassignment ...

2016-11-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2116




[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-22 Thread Srinivas Dhruvakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688708#comment-15688708
 ] 

Srinivas Dhruvakumar commented on KAFKA-4430:
-

Previous configuration - Kafka AGG -> message.max.bytes 1 MB, Topic1 -> 
max.message.bytes 1 MB
MirrorMaker - batch.size 500 KB, max.request.size 1 MB, 
compression gzip, acks 0
Result: Seeing errors only on topic1 on Kafka AGG. The rest of the topics 
(0, 2, ..., n) are working fine.

Current configuration - Kafka AGG -> message.max.bytes 1 MB, Topic1 -> 
max.message.bytes 2 MB
MirrorMaker - batch.size 500 KB, max.request.size 1 MB, 
compression gzip, acks 0
Result: Topic1 is no longer producing any errors on Kafka AGG. Messages in 
Topic1 are of size <= 100 KB.

This looks like a bug. Why would MirrorMaker send a message larger than 
max.request.size or batch.size?


> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>  Labels: newbie
>
> I have a setup as below:
> DC Kafka
> MirrorMaker
> Aggregate Kafka
> Here are the settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka sides. MirrorMaker producer settings: batch.size is set 
> to 500 KB, max.request.size is set to 1 MB, acks to 0, and 
> compression to gzip.
> However, on the Aggregate Kafka I get the following exception:
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to send 
> messages smaller than 1 MB. Are the messages getting dropped? Under what 
> circumstances does this error occur?





[jira] [Created] (KAFKA-4433) Kafka Controller Does not send a LeaderAndIsr to old leader of a topicPartition during reassignment, if the old leader is not a part of the new assigned replicas

2016-11-22 Thread Mayuresh Gharat (JIRA)
Mayuresh Gharat created KAFKA-4433:
--

 Summary: Kafka Controller Does not send a LeaderAndIsr to old 
leader of a topicPartition during reassignment, if the old leader is not a part 
of the new assigned replicas
 Key: KAFKA-4433
 URL: https://issues.apache.org/jira/browse/KAFKA-4433
 Project: Kafka
  Issue Type: Bug
  Components: controller
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat
 Fix For: 0.10.2.0


Consider the following scenario:
old replicas: {[1,2,3], Leader = 1} are reassigned to new replicas: {[2,3,4], 
Leader = 2}.
In this case broker 1 never receives a LeaderAndIsr request telling it to 
become a follower.

This happens because of the following: in 
{{PartitionStateMachine.electLeaderForPartition(...)}}, the call 
{{val (leaderAndIsr, replicas) = leaderSelector.selectLeader(topicAndPartition, 
currentLeaderAndIsr)}} returns, via 
{{ReassignedPartitionLeaderSelector.selectLeader()}}, only the new replicas, 
and only those replicas are sent the LeaderAndIsrRequest. So the old replica 
never receives this LeaderAndIsr.
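
A hedged sketch of the gap (illustrative only, not the actual controller 
code): when the LeaderAndIsr recipients are computed from the new assignment 
alone, any replica present only in the old assignment is never contacted.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ReassignmentNotifySketch {
    public static void main(String[] args) {
        Set<Integer> oldReplicas = new HashSet<>(Arrays.asList(1, 2, 3)); // leader was 1
        Set<Integer> newReplicas = new HashSet<>(Arrays.asList(2, 3, 4)); // leader becomes 2
        // Behavior described in this report: requests go to newReplicas only,
        // so brokers in oldReplicas \ newReplicas are skipped entirely.
        Set<Integer> skipped = new HashSet<>(oldReplicas);
        skipped.removeAll(newReplicas);
        System.out.println("Replicas never sent a LeaderAndIsr request: " + skipped); // [1]
    }
}
{code}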





[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-22 Thread Srinivas Dhruvakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688640#comment-15688640
 ] 

Srinivas Dhruvakumar commented on KAFKA-4430:
-

Thanks, I will try the above steps.

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>  Labels: newbie
>
> I have a setup as below:
> DC Kafka
> MirrorMaker
> Aggregate Kafka
> Here are the settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka sides. MirrorMaker producer settings: batch.size is set 
> to 500 KB, max.request.size is set to 1 MB, acks to 0, and 
> compression to gzip.
> However, on the Aggregate Kafka I get the following exception:
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to send 
> messages smaller than 1 MB. Are the messages getting dropped? Under what 
> circumstances does this error occur?





[jira] [Commented] (KAFKA-4430) Broker logging "Topic and partition to exceptions: [topic,6] -> kafka.common.MessageSizeTooLargeException"

2016-11-22 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688610#comment-15688610
 ] 

huxi commented on KAFKA-4430:
-

It seems no debug statements can be enabled, but you could fire up JConsole to 
check the 'BytesRejectedPerSec' metric, which records the total bytes of 
rejected messages for a given topic. Besides, could you try setting 
'message.max.bytes' to a much larger value, say 2 MB, to see if the problem 
still exists?
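
A sketch of polling that metric programmatically rather than via JConsole, 
assuming remote JMX is enabled on the broker; the host, port, topic, and the 
0.9-era MBean name are assumptions to verify against your deployment:

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BytesRejectedCheck {
    public static void main(String[] args) throws Exception {
        // Assumes the broker was started with JMX enabled, e.g. JMX_PORT=9999.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName meter = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic=topic1");
            // Kafka meters expose Count, OneMinuteRate, FiveMinuteRate, etc.
            System.out.println("BytesRejectedPerSec count: "
                    + conn.getAttribute(meter, "Count"));
        }
    }
}
{code}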

> Broker logging "Topic and partition to exceptions: [topic,6] -> 
> kafka.common.MessageSizeTooLargeException"
> --
>
> Key: KAFKA-4430
> URL: https://issues.apache.org/jira/browse/KAFKA-4430
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
> Environment: Production 
>Reporter: Srinivas Dhruvakumar
>  Labels: newbie
>
> I have a setup as below:
> DC Kafka
> MirrorMaker
> Aggregate Kafka
> Here are the settings: I have set max.message.bytes to 1 MB on both the DC 
> and AGG Kafka sides. MirrorMaker producer settings: batch.size is set 
> to 500 KB, max.request.size is set to 1 MB, acks to 0, and 
> compression to gzip.
> However, on the Aggregate Kafka I get the following exception:
> Closing connection due to error during produce request with correlation id 
> 414156659 from client id producer-1 with ack=0
> Topic and partition to exceptions: [topic1,6] -> 
> kafka.common.MessageSizeTooLargeException
> Is this a bug, or why would this happen? I have configured MirrorMaker to send 
> messages smaller than 1 MB. Are the messages getting dropped? Under what 
> circumstances does this error occur?





[jira] [Commented] (KAFKA-4021) system tests need to enable trace level logging for controller and state-change log

2016-11-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688546#comment-15688546
 ] 

Jun Rao commented on KAFKA-4021:


[~ewencp], that trace-level logging only happens when there are broker 
failures or restarts, which should be rare. If we do performance testing on a 
steady cluster, performance shouldn't be impacted.
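
A sketch of raising those two loggers to TRACE, assuming the log4j 1.x API 
the 0.10-era broker ships with and its stock logger names (normally this 
would be done in the broker's log4j.properties rather than in code):

{code}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class EnableTraceLogging {
    public static void main(String[] args) {
        // Raise the controller and state-change loggers to TRACE at runtime;
        // the equivalent log4j.properties entries set these same logger names.
        Logger.getLogger("kafka.controller").setLevel(Level.TRACE);
        Logger.getLogger("state.change.logger").setLevel(Level.TRACE);
    }
}
{code}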

> system tests need to enable trace level logging for controller and 
> state-change log
> ---
>
> Key: KAFKA-4021
> URL: https://issues.apache.org/jira/browse/KAFKA-4021
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Geoff Anderson
>
> We store detailed information about leader changes at trace level in the 
> controller and the state-change log. Currently, our system tests only collect 
> debug level logs. It would be useful to collect trace level logging for these 
> two logs and archive them if there are test failures, at least for 
> replication related tests.





[jira] [Comment Edited] (KAFKA-1120) Controller could miss a broker state change

2016-11-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15325496#comment-15325496
 ] 

Jun Rao edited comment on KAFKA-1120 at 11/23/16 1:03 AM:
--

A better way is probably for the controller to store the czxid (which is 
guaranteed to be unique and monotonically increasing) of the broker 
registration path. When a ZK watcher is fired, the controller can read the 
current czxid of each broker registration and see if it has changed. If so, 
the controller will treat the broker as if it had failed and then restarted.


was (Author: junrao):
A better way is probably for the controller to store the ZK version of the 
broker registration path. When a ZK watcher is fired, the controller can read 
the current ZK version of each broker registration and see if it has changed. 
If so, the controller will treat the broker as if it had failed and then 
restarted.
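
A sketch of that check using the plain ZooKeeper Java client (illustrative 
only; the class and method names here are not from the controller code):

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class BrokerCzxidCheckSketch {
    private final Map<Integer, Long> lastSeenCzxid = new HashMap<>();

    // Returns true if the broker's registration znode was recreated since last seen.
    boolean brokerBounced(ZooKeeper zk, int brokerId)
            throws KeeperException, InterruptedException {
        Stat stat = zk.exists("/brokers/ids/" + brokerId, false);
        if (stat == null) {
            return false; // broker currently down; nothing registered
        }
        // czxid is the zxid of the znode's create transaction: unique and
        // monotonically increasing, so any change means a re-registration.
        Long previous = lastSeenCzxid.put(brokerId, stat.getCzxid());
        return previous != null && previous != stat.getCzxid();
    }
}
{code}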

> Controller could miss a broker state change 
> 
>
> Key: KAFKA-1120
> URL: https://issues.apache.org/jira/browse/KAFKA-1120
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jun Rao
>
> When the controller is in the middle of processing a task (e.g., preferred 
> leader election, broker change), it holds a controller lock. During this 
> time, a broker could have de-registered and re-registered itself in ZK. After 
> the controller finishes processing the current task, it will start processing 
> the logic in the broker change listener. However, it will see no broker 
> change and therefore won't do anything to the restarted broker. This broker 
> will be in a weird state since the controller doesn't inform it to become the 
> leader of any partition. Yet, the cached metadata in other brokers could 
> still list that broker as the leader for some partitions. Client requests 
> routed to that broker will then get a TopicOrPartitionNotExistException. This 
> broker will continue to be in this bad state until it's restarted again.





[jira] [Commented] (KAFKA-1120) Controller could miss a broker state change

2016-11-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688448#comment-15688448
 ] 

Jun Rao commented on KAFKA-1120:


[~wushujames], it seems that the controller did detect that broker 1 came up:

{noformat}
[2016-11-22 22:57:43,359] INFO [BrokerChangeListener on Controller 4]: Newly 
added brokers: 1, deleted brokers: , all live brokers: 1,2,3,4,5 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
{noformat}

The following log entries seem to be caused by broker 1 going through 
controlled shutdown again:

{noformat}
[2016-11-22 22:57:50,218] DEBUG [Controller 4]: All shutting down brokers: 1 
(kafka.controller.KafkaController)
[2016-11-22 22:57:50,218] DEBUG [Controller 4]: Live brokers: 5,2,3,4 
(kafka.controller.KafkaController)
[2016-11-22 22:58:01,668] DEBUG [Controller 4]: All shutting down brokers: 1 
(kafka.controller.KafkaController)
[2016-11-22 22:58:01,668] DEBUG [Controller 4]: Live brokers: 5,2,3,4 
(kafka.controller.KafkaController)
{noformat}


> Controller could miss a broker state change 
> 
>
> Key: KAFKA-1120
> URL: https://issues.apache.org/jira/browse/KAFKA-1120
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jun Rao
>
> When the controller is in the middle of processing a task (e.g., preferred 
> leader election, broker change), it holds a controller lock. During this 
> time, a broker could have de-registered and re-registered itself in ZK. After 
> the controller finishes processing the current task, it will start processing 
> the logic in the broker change listener. However, it will see no broker 
> change and therefore won't do anything to the restarted broker. This broker 
> will be in a weird state since the controller doesn't inform it to become the 
> leader of any partition. Yet, the cached metadata in other brokers could 
> still list that broker as the leader for some partitions. Client requests 
> routed to that broker will then get a TopicOrPartitionNotExistException. This 
> broker will continue to be in this bad state until it's restarted again.





[jira] [Comment Edited] (KAFKA-4271) The console consumer fails on Windows with new consumer is used

2016-11-22 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684468#comment-15684468
 ] 

Vahid Hashemian edited comment on KAFKA-4271 at 11/23/16 12:21 AM:
---

[~huxi_2b] Thanks for responding. I checked the RAM, and out of 4 GB available 
there's plenty left (about half) when I run the consumer.
I also set the config you mentioned to 4 times the default size, but no luck. I 
still get the same error.


was (Author: vahid):
[~huxi] Thanks for responding. I checked the RAM, and out of 4 GB available 
there's plenty left (about half) when I run the consumer.
I also set the config you mentioned to 4 times the default size, but no luck. I 
still get the same error.

> The console consumer fails on Windows with new consumer is used 
> 
>
> Key: KAFKA-4271
> URL: https://issues.apache.org/jira/browse/KAFKA-4271
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Vahid Hashemian
>
> When I try to consume messages using the new consumer (Quickstart Step 5), I 
> get an exception on the broker side. The old consumer works fine.
> {code}
> java.io.IOException: Map failed
> at sun.nio.ch.FileChannelImpl.map(Unknown Source)
> at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:61)
> at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:51)
> at kafka.log.LogSegment.<init>(LogSegment.scala:67)
> at kafka.log.Log.loadSegments(Log.scala:255)
> at kafka.log.Log.<init>(Log.scala:108)
> at kafka.log.LogManager.createLog(LogManager.scala:362)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
> at kafka.cluster.Partition.makeLeader(Partition.scala:168)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:740)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:739)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:739)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:685)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> ... 29 more
> {code}
> This issue seems to break the broker and I have to clear out the logs so I 
> can bring the broker back up again.
> Update: This issue seems to occur on 32-bit Windows only. I tried this on a 
> Windows 32-bit VM. On a 64-bit machine I did not get the error.





[jira] [Updated] (KAFKA-2273) Add rebalance with a minimal number of reassignments to server-defined strategy list

2016-11-22 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-2273:
---
Status: In Progress  (was: Patch Available)

KIP-54 voting in progress

> Add rebalance with a minimal number of reassignments to server-defined 
> strategy list
> 
>
> Key: KAFKA-2273
> URL: https://issues.apache.org/jira/browse/KAFKA-2273
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Olof Johansson
>Assignee: Vahid Hashemian
>  Labels: newbie++, newbiee
> Fix For: 0.10.2.0
>
>
> Add a new partitions.assignment.strategy to the server-defined list that will 
> do reassignments based on moving as few partitions as possible. This should 
> be a fairly common reassignment strategy, especially for cases where the 
> consumer has to maintain state, either in memory or on disk.
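
The idea, sketched below (a hedged illustration, not the actual KIP-54 
algorithm): keep each partition with its previous owner whenever that consumer 
is still alive, and redistribute only the orphaned partitions.

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StickyAssignSketch {
    // Minimal-movement reassignment: partitions keep their previous owner when
    // possible; only orphaned partitions (owner gone) are redistributed.
    // Rebalancing overloaded survivors is omitted for brevity.
    static Map<String, List<Integer>> assign(List<String> liveConsumers,
                                             int numPartitions,
                                             Map<Integer, String> previousOwner) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        liveConsumers.forEach(c -> assignment.put(c, new ArrayList<>()));
        List<Integer> orphans = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            String owner = previousOwner.get(p);
            if (owner != null && assignment.containsKey(owner)) {
                assignment.get(owner).add(p); // sticky: no movement
            } else {
                orphans.add(p);
            }
        }
        for (int p : orphans) {
            // Hand each orphan to the currently least-loaded live consumer.
            liveConsumers.stream()
                    .min(Comparator.comparingInt(c -> assignment.get(c).size()))
                    .ifPresent(c -> assignment.get(c).add(p));
        }
        return assignment;
    }
}
{code}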





[jira] [Commented] (KAFKA-4392) Failed to lock the state directory due to an unexpected exception

2016-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688329#comment-15688329
 ] 

ASF GitHub Bot commented on KAFKA-4392:
---

GitHub user guozhangwang reopened a pull request:

https://github.com/apache/kafka/pull/2121

KAFKA-4392: Fix race condition in directory cleanup



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K4392-race-dir-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2121


commit ab0c24ec4c90e71d45ed1d8dcd96af65a1df59d6
Author: Guozhang Wang 
Date:   2016-11-10T21:39:31Z

quick fix

commit 157a2e39e539ba8bd28fa09f6f0a800fde1a22f6
Author: Guozhang Wang 
Date:   2016-11-22T18:49:11Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K4392-race-dir-cleanup
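
The race described here is between one thread cleaning up an unused task 
directory and another thread trying to lock it; a hedged sketch of tolerating 
that (illustrative only, not the actual change in this PR):

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class StateDirLockSketch {
    // Try to lock <taskDir>/.lock; returns null when the lock is held elsewhere
    // or the directory vanished underneath us (the race in this report).
    static FileLock tryLockTaskDir(Path taskDir) throws IOException {
        try {
            FileChannel channel = FileChannel.open(
                    taskDir.resolve(".lock"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE);
            FileLock lock = channel.tryLock();
            if (lock == null) {
                channel.close(); // held by another process
            }
            return lock;
        } catch (NoSuchFileException e) {
            // Parent directory was deleted by a concurrent cleanup thread:
            // treat as "nothing to lock" instead of propagating the exception.
            return null;
        }
    }
}
{code}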




> Failed to lock the state directory due to an unexpected exception
> -
>
> Key: KAFKA-4392
> URL: https://issues.apache.org/jira/browse/KAFKA-4392
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Ara Ebrahimi
>Assignee: Guozhang Wang
>
> This happened on streaming startup, on a clean installation, no existing 
> folder. Here I was starting 4 instances of our streaming app on 4 machines 
> and one threw this exception. Seems to me there’s a race condition somewhere 
> when instances discover others, or something like that.
> 2016-11-02 15:43:47 INFO  StreamRunner:59 - Started http server successfully.
> 2016-11-02 15:44:50 ERROR StateDirectory:147 - Failed to lock the state 
> directory due to an unexpected exception
> java.nio.file.NoSuchFileException: 
> /data/1/kafka-streams/myapp-streams/7_21/.lock
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
>   at java.nio.channels.FileChannel.open(FileChannel.java:287)
>   at java.nio.channels.FileChannel.open(FileChannel.java:335)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.getOrCreateFileChannel(StateDirectory.java:176)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.lock(StateDirectory.java:90)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:140)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeClean(StreamThread.java:552)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:459)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> ^C
> [arae@a4 ~]$ ls -al /data/1/kafka-streams/myapp-streams/7_21/
> ls: cannot access /data/1/kafka-streams/myapp-streams/7_21/: No such file or 
> directory
> [arae@a4 ~]$ ls -al /data/1/kafka-streams/myapp-streams/
> total 4
> drwxr-xr-x 74 root root 4096 Nov  2 15:44 .
> drwxr-xr-x  3 root root   27 Nov  2 15:43 ..
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_1
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_13
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_14
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_16
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_2
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_22
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_28
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_3
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_31
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_5
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_7
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_8
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_9
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_1
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_10
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_14
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_15
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_16
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_17
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_18
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_3
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_5
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_1
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_10
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_12
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_20
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_24
> drwxr-xr-x  3 root root   

[jira] [Commented] (KAFKA-4392) Failed to lock the state directory due to an unexpected exception

2016-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688328#comment-15688328
 ] 

ASF GitHub Bot commented on KAFKA-4392:
---

Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/2121


> Failed to lock the state directory due to an unexpected exception
> -
>
> Key: KAFKA-4392
> URL: https://issues.apache.org/jira/browse/KAFKA-4392
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Ara Ebrahimi
>Assignee: Guozhang Wang
>
> This happened on streaming startup, on a clean installation, no existing 
> folder. Here I was starting 4 instances of our streaming app on 4 machines 
> and one threw this exception. Seems to me there’s a race condition somewhere 
> when instances discover others, or something like that.
> 2016-11-02 15:43:47 INFO  StreamRunner:59 - Started http server successfully.
> 2016-11-02 15:44:50 ERROR StateDirectory:147 - Failed to lock the state 
> directory due to an unexpected exception
> java.nio.file.NoSuchFileException: 
> /data/1/kafka-streams/myapp-streams/7_21/.lock
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
>   at java.nio.channels.FileChannel.open(FileChannel.java:287)
>   at java.nio.channels.FileChannel.open(FileChannel.java:335)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.getOrCreateFileChannel(StateDirectory.java:176)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.lock(StateDirectory.java:90)
>   at 
> org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:140)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeClean(StreamThread.java:552)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:459)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> ^C
> [arae@a4 ~]$ ls -al /data/1/kafka-streams/myapp-streams/7_21/
> ls: cannot access /data/1/kafka-streams/myapp-streams/7_21/: No such file or 
> directory
> [arae@a4 ~]$ ls -al /data/1/kafka-streams/myapp-streams/
> total 4
> drwxr-xr-x 74 root root 4096 Nov  2 15:44 .
> drwxr-xr-x  3 root root   27 Nov  2 15:43 ..
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_1
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_13
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_14
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_16
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_2
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_22
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_28
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_3
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_31
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_5
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_7
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_8
> drwxr-xr-x  3 root root   32 Nov  2 15:43 0_9
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_1
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_10
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_14
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_15
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_16
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_17
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_18
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_3
> drwxr-xr-x  3 root root   32 Nov  2 15:43 1_5
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_1
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_10
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_12
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_20
> drwxr-xr-x  3 root root   60 Nov  2 15:43 2_24
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_10
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_11
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_19
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_20
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_25
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_26
> drwxr-xr-x  3 root root   61 Nov  2 15:43 3_3
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_11
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_12
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_18
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_19
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_24
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_25
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_26
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_4
> drwxr-xr-x  3 root root   64 Nov  2 15:43 4_9
> drwxr-xr-x  3 root root   58 Nov  2 15:43 5_1
> drwxr-xr-x  3 root root   58 Nov  2 15:43 5_10

[GitHub] kafka pull request #2121: KAFKA-4392: Fix race condition in directory cleanu...

2016-11-22 Thread guozhangwang
GitHub user guozhangwang reopened a pull request:

https://github.com/apache/kafka/pull/2121

KAFKA-4392: Fix race condition in directory cleanup



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K4392-race-dir-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2121


commit ab0c24ec4c90e71d45ed1d8dcd96af65a1df59d6
Author: Guozhang Wang 
Date:   2016-11-10T21:39:31Z

quick fix

commit 157a2e39e539ba8bd28fa09f6f0a800fde1a22f6
Author: Guozhang Wang 
Date:   2016-11-22T18:49:11Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K4392-race-dir-cleanup






[GitHub] kafka pull request #2121: KAFKA-4392: Fix race condition in directory cleanu...

2016-11-22 Thread guozhangwang
Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/2121




[GitHub] kafka-site issue #32: Added section for Kafka Summit on the Events page

2016-11-22 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/32
  
LGTM.




[jira] [Updated] (KAFKA-4095) When a topic is deleted and then created with the same name, 'committed' offsets are not reset

2016-11-22 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4095:
---
Status: Patch Available  (was: Open)

> When a topic is deleted and then created with the same name, 'committed' 
> offsets are not reset
> --
>
> Key: KAFKA-4095
> URL: https://issues.apache.org/jira/browse/KAFKA-4095
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0, 0.9.0.1
>Reporter: Alex Glikson
>Assignee: Vahid Hashemian
>
> I encountered a very strange behavior of Kafka, which seems to be a bug.
> After deleting a topic and re-creating it with the same name, I produced a 
> certain amount of new messages, and then opened a consumer with the same ID 
> that I used before re-creating the topic (with auto.commit=false, 
> auto.offset.reset=earliest). While the latest offsets seemed up to date, the 
> *committed* offset (returned by the committed() method) was an *old* offset, 
> from the time before the topic had been deleted and re-created.
> I would have assumed that when a topic is deleted, all the associated 
> topic-partitions and consumer groups are recycled too.
> I am using the Java client version 0.9, with Kafka server 0.10.
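
A minimal sketch of the check that exposes this behavior, using the Java 
consumer (the bootstrap address, group id, and topic name are placeholders):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "same-group-as-before"); // reused after re-creation
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("recreated-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            // Per this report, committed() can still return an offset from the
            // deleted incarnation of the topic instead of null.
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println("committed offset: " + committed);
        }
    }
}
{code}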





[jira] [Commented] (KAFKA-1120) Controller could miss a broker state change

2016-11-22 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688295#comment-15688295
 ] 

James Cheng commented on KAFKA-1120:


I believe we ran into this today.

{noformat}
core@core04 $ grep brokers controller.log.2016-11-22-22
[2016-11-22 22:50:32,883] INFO [Controller 4]: Currently active brokers in the 
cluster: Set(1, 3, 4, 5) (kafka.controller.KafkaController)
[2016-11-22 22:50:32,883] INFO [Controller 4]: Currently shutting brokers in 
the cluster: Set() (kafka.controller.KafkaController)
[2016-11-22 22:51:44,601] INFO [BrokerChangeListener on Controller 4]: Broker 
change listener fired for path /brokers/ids with children 1,2,3,4,5 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
[2016-11-22 22:51:44,607] INFO [BrokerChangeListener on Controller 4]: Newly 
added brokers: 2, deleted brokers: , all live brokers: 1,2,3,4,5 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
[2016-11-22 22:55:18,831] DEBUG [Controller 4]: All shutting down brokers: 1 
(kafka.controller.KafkaController)
[2016-11-22 22:55:18,831] DEBUG [Controller 4]: Live brokers: 5,2,3,4 
(kafka.controller.KafkaController)
[2016-11-22 22:57:11,791] INFO [BrokerChangeListener on Controller 4]: Broker 
change listener fired for path /brokers/ids with children 2,3,4,5 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
[2016-11-22 22:57:11,980] INFO [BrokerChangeListener on Controller 4]: Newly 
added brokers: , deleted brokers: 1, all live brokers: 2,3,4,5 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
[2016-11-22 22:57:11,985] INFO [Controller 4]: Removed ArrayBuffer(1) from list 
of shutting down brokers. (kafka.controller.KafkaController)
[2016-11-22 22:57:43,133] INFO [BrokerChangeListener on Controller 4]: Broker 
change listener fired for path /brokers/ids with children 1,2,3,4,5 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
[2016-11-22 22:57:43,359] INFO [BrokerChangeListener on Controller 4]: Newly 
added brokers: 1, deleted brokers: , all live brokers: 1,2,3,4,5 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener)
[2016-11-22 22:57:50,218] DEBUG [Controller 4]: All shutting down brokers: 1 
(kafka.controller.KafkaController)
[2016-11-22 22:57:50,218] DEBUG [Controller 4]: Live brokers: 5,2,3,4 
(kafka.controller.KafkaController)
[2016-11-22 22:58:01,668] DEBUG [Controller 4]: All shutting down brokers: 1 
(kafka.controller.KafkaController)
[2016-11-22 22:58:01,668] DEBUG [Controller 4]: Live brokers: 5,2,3,4 
(kafka.controller.KafkaController)
core@core04 $
{noformat}

At 2016-11-22 22:57:11,791, broker 1 went away, and the controller noticed it.
At 2016-11-22 22:57:43,133, broker 1 came back, and the controller noticed it.
At 2016-11-22 22:57:50,218, the controller said it was "done" with its work, 
yet it doesn't seem to know about broker 1, even though broker 1 is running.

> Controller could miss a broker state change 
> 
>
> Key: KAFKA-1120
> URL: https://issues.apache.org/jira/browse/KAFKA-1120
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jun Rao
>
> When the controller is in the middle of processing a task (e.g., preferred 
> leader election, broker change), it holds a controller lock. During this 
> time, a broker could have de-registered and re-registered itself in ZK. After 
> the controller finishes processing the current task, it will start processing 
> the logic in the broker change listener. However, it will see no broker 
> change and therefore won't do anything to the restarted broker. This broker 
> will be in a weird state since the controller doesn't inform it to become the 
> leader of any partition. Yet, the cached metadata in other brokers could 
> still list that broker as the leader for some partitions. Client requests 
> routed to that broker will then get a TopicOrPartitionNotExistException. This 
> broker will continue to be in this bad state until it's restarted again.





[jira] [Commented] (KAFKA-4095) When a topic is deleted and then created with the same name, 'committed' offsets are not reset

2016-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688291#comment-15688291
 ] 

ASF GitHub Bot commented on KAFKA-4095:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/2160

KAFKA-4095: Remove topic offsets and owners from ZK consumer groups upon 
topic deletion



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4095

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2160.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2160


commit 953f4ccb5f6009c4198e1aadb4e0d5758299fdda
Author: Vahid Hashemian 
Date:   2016-11-22T23:25:41Z

KAFKA-4095: Remove topic offsets and owners from ZK consumer groups upon 
topic deletion




> When a topic is deleted and then created with the same name, 'committed' 
> offsets are not reset
> --
>
> Key: KAFKA-4095
> URL: https://issues.apache.org/jira/browse/KAFKA-4095
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Alex Glikson
>Assignee: Vahid Hashemian
>
> I encountered a very strange behavior of Kafka, which seems to be a bug.
> After deleting a topic and re-creating it with the same name, I produced a 
> certain amount of new messages, and then opened a consumer with the same ID 
> that I used before re-creating the topic (with auto.commit=false, 
> auto.offset.reset=earliest). While the latest offsets seemed up to date, the 
> *committed* offset (returned by the committed() method) was an *old* offset, 
> from the time before the topic had been deleted and re-created.
> I would have assumed that when a topic is deleted, all the associated 
> topic-partitions and consumer groups are recycled too.
> I am using the Java client version 0.9, with Kafka server 0.10.





[GitHub] kafka pull request #2160: KAFKA-4095: Remove topic offsets and owners from Z...

2016-11-22 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/2160

KAFKA-4095: Remove topic offsets and owners from ZK consumer groups upon 
topic deletion



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4095

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2160.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2160


commit 953f4ccb5f6009c4198e1aadb4e0d5758299fdda
Author: Vahid Hashemian 
Date:   2016-11-22T23:25:41Z

KAFKA-4095: Remove topic offsets and owners from ZK consumer groups upon 
topic deletion






[jira] [Commented] (KAFKA-4273) Streams DSL - Add TTL / retention period support for intermediate topics and state stores

2016-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15688015#comment-15688015
 ] 

ASF GitHub Bot commented on KAFKA-4273:
---

GitHub user dpoldrugo opened a pull request:

https://github.com/apache/kafka/pull/2159

KAFKA 4273 - Add TTL support for RocksDB

Since the Streams DSL doesn't support fine-grained configuration of state 
stores (it uses only RocksDB), I have added a new StreamsConfig called 
`rocksdb.ttl.sec` which allows you to set a TTL for all state stores used by 
the topology. In short, if you set the property to a value `>=1`, TtlDB will 
be used instead of RocksDB, which leads to records expiring after the defined 
period.
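
For orientation, a minimal sketch of opening a TtlDB through the org.rocksdb Java API (not the PR's code; the path and TTL are illustrative, and the close() calls assume a reasonably recent rocksdbjni):

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TtlDB;

public class TtlDbSketch {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        Options options = new Options().setCreateIfMissing(true);
        int ttlSeconds = 3600; // e.g. the value configured via rocksdb.ttl.sec
        TtlDB db = TtlDB.open(options, "/tmp/state-store", ttlSeconds, false);
        db.put("key".getBytes(), "value".getBytes());
        // Entries older than ttlSeconds become eligible for deletion, but are
        // only dropped during compaction rather than at an exact deadline.
        db.close();
        options.close();
    }
}
```

Note that TtlDB removes expired entries lazily, during compaction, which is also part of why forcing expiry within a short test window is difficult.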

This should help users to bound their disk usage and provide a 
configuration for use cases where your data has natural TTL/retention. For 
example, when you process data only for one hour, and after that you don't need 
the data in state stores anymore.

I have added a 
[test](https://github.com/apache/kafka/compare/trunk...dpoldrugo:KAFKA-4273-ttl-support?expand=1#diff-d908a80c770d196ac823752da3b3a864R117)
 to check that TtlDB is expiring records, but I can't make TtlDB expire records 
within a reasonable window (1 minute). Do you have any suggestions on how to 
force TtlDB to expire records more quickly?

Since I'm using Kafka and Kafka Streams 0.10.1.0, I have also added this 
code to the 
[0.10.1](https://github.com/dpoldrugo/kafka/tree/0.10.1-KAFKA-4273-ttl-support) 
branch, and if the review goes well I hope it can be added to the 0.10.1.1 
release.
The patch is here: 
[KAFKA_4273_Add_TTL_support_for_RocksDB_v2.patch.txt](https://github.com/apache/kafka/files/607638/KAFKA_4273_Add_TTL_support_for_RocksDB_v2.patch.txt)

**Suggestion for future work**
Since this config/feature applies to all state stores, it would be nice to 
provide an API for users to configure TTL for every state store, for example 
during topology building with KStreamBuilder, as sketched below.

Now: KStreamBuilder#table(String topic, final String storeName)
Suggestion: KStreamBuilder#table(String topic, final String storeName, 
**_StoreOptions_ storeOptions**)
Where **_StoreOptions_** would be something like this: `{ ttlSeconds: int }`
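
A hypothetical sketch of that suggestion (StoreOptions does not exist in Kafka Streams; the shape is purely illustrative):

```java
// Hypothetical options holder, for illustration only.
public class StoreOptions {
    public final int ttlSeconds;

    public StoreOptions(int ttlSeconds) {
        this.ttlSeconds = ttlSeconds;
    }
}

// Suggested (hypothetical) usage during topology building:
// builder.table("prices", "prices-store", new StoreOptions(48 * 60 * 60));
```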

More details: [KAFKA-4273](https://issues.apache.org/jira/browse/KAFKA-4273)

@guozhangwang @dguy @mjsax @norwood @enothereska @ijuma - could you 
check this out?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dpoldrugo/kafka KAFKA-4273-ttl-support

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2159.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2159


commit 5a3f1372daf2a0e939b246756c7e712e9ea21662
Author: dpoldrugo 
Date:   2016-11-22T21:01:02Z

KAFKA 4273 - Add TTL support for RocksDB




> Streams DSL - Add TTL / retention period support for intermediate topics and 
> state stores
> -
>
> Key: KAFKA-4273
> URL: https://issues.apache.org/jira/browse/KAFKA-4273
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Davor Poldrugo
>
> Hi!
> I'm using Streams DSL (0.10.0.1), which can only use RocksDB for local state 
> as far as I know - it's not configurable.
> In my use case my data has a TTL / retention period. It's 48 hours. After 
> that, the data can be discarded.
> I join two topics: "messages" and "prices" using a windowed inner join.
> The two intermediate Kafka topics for this join are named:
>  * messages-prices-join-this-changelog
>  * messages-prices-join-other-changelog
> Since these topics are created as compacted by Kafka Streams, and I don't 
> want to keep data forever, I have altered them to not use compaction. Right 
> now my RocksDB state stores grow indefinitely, and I don't have any options 
> to define a TTL, or somehow periodically clean out the older data.
> A "hack" that I use to keep my disk usage low - I have scheduled a job to 
> periodically stop Kafka Streams instances - one at a time. This triggers a 
> rebalance, and partitions migrate to other instances. When the instance is 
> started again, there's another rebalance, and sometimes this instance starts 
> processing partitions that it wasn't processing before the stop - which 
> leads to deletion of the RocksDB state store for those partitions 
> (state.cleanup.delay.ms). In the next rebalance the local store is recreated 
> with a restore consumer - which reads data from - as previously mentioned - a 
> non-compacted topic. And this 

[GitHub] kafka pull request #2159: KAFKA 4273 - Add TTL support for RocksDB

2016-11-22 Thread dpoldrugo
GitHub user dpoldrugo opened a pull request:

https://github.com/apache/kafka/pull/2159

KAFKA 4273 - Add TTL support for RocksDB

Since the Streams DSL doesn't support fine-grained configuration of state 
stores (it uses only RocksDB), I have added a new StreamsConfig called 
`rocksdb.ttl.sec` which allows you to set a TTL for all state stores used by 
the topology. In short, if you set the property to a value `>=1`, TtlDB will 
be used instead of RocksDB, which leads to records expiring after the defined 
period.

This should help users to bound their disk usage and provide a 
configuration for use cases where your data has natural TTL/retention. For 
example, when you process data only for one hour, and after that you don't need 
the data in state stores anymore.

I have added a 
[test](https://github.com/apache/kafka/compare/trunk...dpoldrugo:KAFKA-4273-ttl-support?expand=1#diff-d908a80c770d196ac823752da3b3a864R117)
 to check that TtlDB is expiring records, but I can't make TtlDB expire records 
within a reasonable window (1 minute). Do you have any suggestions on how to 
force TtlDB to expire records more quickly?

Since I'm using Kafka and Kafka Streams 0.10.1.0, I have also added this 
code to the 
[0.10.1](https://github.com/dpoldrugo/kafka/tree/0.10.1-KAFKA-4273-ttl-support) 
branch, and if the review goes well I hope it can be added to the 0.10.1.1 
release.
The patch is here: 
[KAFKA_4273_Add_TTL_support_for_RocksDB_v2.patch.txt](https://github.com/apache/kafka/files/607638/KAFKA_4273_Add_TTL_support_for_RocksDB_v2.patch.txt)

**Suggestion for future work**
Since this config/feature applies to all state stores, it would be nice to 
provide an API for users to configure TTL for every state store, for example 
during topology building with KStreamBuilder.

Now: KStreamBuilder#table(String topic, final String storeName)
Suggestion: KStreamBuilder#table(String topic, final String storeName, 
**_StoreOptions_ storeOptions**)
Where **_StoreOptions_** would be something like this: `{ ttlSeconds: int }`

More details: [KAFKA-4273](https://issues.apache.org/jira/browse/KAFKA-4273)

@guozhangwang @dguy @mjsax @norwood @enothereska @ijuma - could you 
check this out?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dpoldrugo/kafka KAFKA-4273-ttl-support

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2159.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2159


commit 5a3f1372daf2a0e939b246756c7e712e9ea21662
Author: dpoldrugo 
Date:   2016-11-22T21:01:02Z

KAFKA 4273 - Add TTL support for RocksDB




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #1705

2016-11-22 Thread Apache Jenkins Server
See 



[GitHub] kafka-site pull request #32: Added section for Kafka Summit on the Events pa...

2016-11-22 Thread derrickdoo
GitHub user derrickdoo opened a pull request:

https://github.com/apache/kafka-site/pull/32

Added section for Kafka Summit on the Events page

Added section for Kafka Summit on the Events page

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/derrickdoo/kafka-site eventPageUpdates

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/32.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #32


commit bf443fe5dcb423f593d9f7988fd8976d6ce2a1fd
Author: Derrick Or 
Date:   2016-11-09T23:35:51Z

added new meetup links and kafka summit to events page




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #1055

2016-11-22 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4355: Skip topics that have no partitions

--
[...truncated 12149 lines...]
org.apache.kafka.common.record.RecordTest > testChecksum[190] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[190] STARTED

org.apache.kafka.common.record.RecordTest > testEquality[190] PASSED

org.apache.kafka.common.record.RecordTest > testFields[190] STARTED

org.apache.kafka.common.record.RecordTest > testFields[190] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[191] STARTED

org.apache.kafka.common.record.RecordTest > testChecksum[191] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[191] STARTED

org.apache.kafka.common.record.RecordTest > testEquality[191] PASSED

org.apache.kafka.common.record.RecordTest > testFields[191] STARTED

org.apache.kafka.common.record.RecordTest > testFields[191] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[0] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[0] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[1] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[1] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[2] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[2] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[3] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[3] PASSED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testSerializationRoundtrip STARTED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testSerializationRoundtrip PASSED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testTopiPartitionSerializationCompatibility STARTED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testTopiPartitionSerializationCompatibility PASSED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdLow STARTED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdLow PASSED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdHigh 
STARTED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdHigh PASSED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionName STARTED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionName PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionDefault STARTED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionDefault PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueExceptions STARTED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueExceptions PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionInheritance 
STARTED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionInheritance PASSED

org.apache.kafka.common.protocol.ErrorsTest > testNoneException STARTED

org.apache.kafka.common.protocol.ErrorsTest > testNoneException PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueErrorCodes STARTED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueErrorCodes PASSED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionsAreNotGeneric 
STARTED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionsAreNotGeneric PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testToString 
STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testToString 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadStringSizeTooLarge STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadStringSizeTooLarge PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testNullableDefault STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testNullableDefault PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeStringSize STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeStringSize PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadArraySizeTooLarge STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadArraySizeTooLarge PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeBytesSize STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeBytesSize PASSED


Re: Restrict consumers from connecting to Kafka cluster

2016-11-22 Thread Ofir Manor
Check the Security section of the documentation, especially authorization
(which also requires authentication):
http://kafka.apache.org/documentation.html
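
For example, with an authorizer enabled on the brokers, consumption can be limited to explicitly allowed principals. A sketch for 0.10.x (principal, topic and group names are illustrative):

# server.properties: enable the standard authorizer
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# grant consume rights only to a specific (authenticated) principal
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --consumer --topic test --group test-group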

Ofir Manor

Co-Founder & CTO | Equalum

Mobile: +972-54-7801286 | Email: ofir.ma...@equalum.io

On Tue, Nov 22, 2016 at 9:13 PM, ravi singh  wrote:

> Is it possible to restrict Kafka consumers from consuming from a given
>  Kafka cluster?
>
> --
> *Regards,*
> *Ravi*
>


Restrict consumers from connecting to Kafka cluster

2016-11-22 Thread ravi singh
Is it possible to restrict Kafka consumers from consuming from a given
 Kafka cluster?

-- 
*Regards,*
*Ravi*


Kafka consumers are not equally distributed

2016-11-22 Thread Ghosh, Achintya (Contractor)
Hi there,

We are doing a load test in Kafka at 25 tps. For the first 9 hours it went 
fine and almost 80K messages/hr were processed; after that we saw a lot of lag 
and we stopped the incoming load.

Currently we see 15K messages/hr being processed. We have 40 consumer 
instances with concurrency 4 and 2 topics, each with 160 partitions, so there 
is one consumer thread per partition.

What we found is that some of the partitions are sitting idle and some are 
overloaded, which is really slowing down consumer message processing.

Why is rebalancing not happening, and why are the existing messages not 
distributed equally among the instances? We tried restarting the app; still 
the same pace. Any idea what could be the reason?

Thanks
Achintya



Re: should HeartbeatThread be a daemon thread?

2016-11-22 Thread radai
a similar issue exists with the old client stack as well -
https://github.com/apache/kafka/pull/1930

On Mon, Nov 21, 2016 at 9:10 PM, Jason Gustafson  wrote:

> Hey David,
>
> It probably should be a daemon thread. Perhaps open a JIRA?
>
> Thanks,
> Jason
>
> On Mon, Nov 21, 2016 at 2:03 PM, David Judd 
> wrote:
>
> > Hi folks,
> >
> > We're seeing an issue where an exception inside the main processing loop
> of
> > a consumer doesn't cause the JVM to exit, as expected (and, in our case,
> > desired). From the thread dump, it appears that what's blocking exit is
> the
> > "kafka-coordinator-heartbeat-thread". From what I understand of what it
> > does, it seems to me like this should be a daemon thread, but it's not.
> Is
> > this a bug, or deliberate?
> >
> > Thanks,
> > David
> >
>
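
For reference, a minimal standalone sketch of the Thread semantics in question (nothing Kafka-specific): the JVM exits only once all non-daemon threads have finished, so a non-daemon heartbeat-style thread keeps the process alive even after the main loop has died.

public class DaemonSketch {
    public static void main(String[] args) {
        Thread heartbeat = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(1000); // stand-in for periodic heartbeating
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "heartbeat");
        // Without this line the JVM would keep running after main() returns,
        // because any live non-daemon thread prevents JVM exit.
        heartbeat.setDaemon(true);
        heartbeat.start();
    }
}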


[jira] [Commented] (KAFKA-4322) StateRestoreCallback begin and end indication

2016-11-22 Thread Mark Shelton (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687516#comment-15687516
 ] 

Mark Shelton commented on KAFKA-4322:
-

The logging is the same for us, and the logging framework is not the concern.

We know some domain-specific information about the keys and would like to, at 
the end of a state restore, report for example how many keys of a given type 
were added and deleted. Having the begin and end callbacks makes this much 
easier: the begin callback can allocate the structure for statistics and the 
end callback can log the breakdown of keys added/removed per type, subtype, 
etc. My app has domain-specific knowledge of keys that Streams won't have.
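
A minimal sketch of what the proposal would enable (beginRestore/endRestore are the proposed methods, not existing API; the key-type extraction is an illustrative placeholder):

{noformat}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.streams.processor.StateRestoreCallback;

public class CountingRestoreCallback implements StateRestoreCallback {
    private Map<String, Integer> countsByType;

    public void beginRestore() {                  // proposed, not existing
        countsByType = new HashMap<>();           // allocate statistics up front
    }

    @Override
    public void restore(byte[] key, byte[] value) {
        String type = typeOf(key);                // domain-specific knowledge
        countsByType.merge(type, 1, Integer::sum);
    }

    public void endRestore() {                    // proposed, not existing
        countsByType.forEach((type, n) ->
            System.out.printf("restored %d keys of type %s%n", n, type));
    }

    private String typeOf(byte[] key) {
        return "unknown";                         // illustrative placeholder
    }
}
{noformat}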


> StateRestoreCallback begin and end indication
> -
>
> Key: KAFKA-4322
> URL: https://issues.apache.org/jira/browse/KAFKA-4322
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Mark Shelton
>Assignee: Mark Shelton
>Priority: Minor
>
> In Kafka Streams, the StateRestoreCallback interface provides only a single 
> method "restore(byte[] key, byte[] value)" that is called for every key-value 
> pair to be restored. 
> It would be nice to have "beginRestore" and "endRestore" methods as part of 
> StateRestoreCallback.
> Kafka Streams would call "beginRestore" before restoring any keys, and would 
> call "endRestore" when it determines that it is done. This allows an 
> implementation, for example, to report on the number of keys restored and 
> perform a commit after the last key was restored. Other uses are conceivable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-4322) StateRestoreCallback begin and end indication

2016-11-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687453#comment-15687453
 ] 

Guozhang Wang edited comment on KAFKA-4322 at 11/22/16 6:09 PM:


I see. Just for clarification on "own logging": are you just adding logging 
statements in your code and only looking at those, or are you using a 
different logging framework than log4j? In the latter case I think it should 
be fine, as Streams, like the other clients, simply hooks into the slf4j 
interface, which can be used with any compatible logging framework, not only 
log4j.

Streams is designed to abstract the operational legwork away from users and 
let them focus on the computational logic only, so we would really like to 
surface such information to users if possible. As you mentioned, whether / 
what has been restored upon initialization should be a common need, so we'd 
try really hard to figure out whether it can be supported out-of-the-box 
before we give up and require users to always specify such logic in a callback.


was (Author: guozhang):
I see. Just for clarification on "own logging": are you just adding logging 
statements in your code and only looking at those, or are you using a 
different logging framework than log4j? In the latter case I think it should 
be fine, as Streams, like the other clients, simply hooks into the slf4j 
interface, which can be used with any compatible logging framework, not only 
log4j.

Streams is designed to abstract the operational legwork away from users and 
let them focus on the computational logic only, so we would really like to 
surface such information to users if possible. As you mentioned, whether / 
what has been restored upon initialization should be a common need, which we'd 
better support out-of-the-box rather than requiring users to always specify 
such logic in a callback.

> StateRestoreCallback begin and end indication
> -
>
> Key: KAFKA-4322
> URL: https://issues.apache.org/jira/browse/KAFKA-4322
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Mark Shelton
>Assignee: Mark Shelton
>Priority: Minor
>
> In Kafka Streams, the StateRestoreCallback interface provides only a single 
> method "restore(byte[] key, byte[] value)" that is called for every key-value 
> pair to be restored. 
> It would be nice to have "beginRestore" and "endRestore" methods as part of 
> StateRestoreCallback.
> Kafka Streams would call "beginRestore" before restoring any keys, and would 
> call "endRestore" when it determines that it is done. This allows an 
> implementation, for example, to report on the number of keys restored and 
> perform a commit after the last key was restored. Other uses are conceivable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4322) StateRestoreCallback begin and end indication

2016-11-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687453#comment-15687453
 ] 

Guozhang Wang commented on KAFKA-4322:
--

I see. Just for clarification on "own logging": are you just adding logging 
statements in your code and only looking at those, or are you using a 
different logging framework than log4j? In the latter case I think it should 
be fine, as Streams, like the other clients, simply hooks into the slf4j 
interface, which can be used with any compatible logging framework, not only 
log4j.

Streams is designed to abstract the operational legwork away from users and 
let them focus on the computational logic only, so we would really like to 
surface such information to users if possible. As you mentioned, whether / 
what has been restored upon initialization should be a common need, which we'd 
better support out-of-the-box rather than requiring users to always specify 
such logic in a callback.

> StateRestoreCallback begin and end indication
> -
>
> Key: KAFKA-4322
> URL: https://issues.apache.org/jira/browse/KAFKA-4322
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Mark Shelton
>Assignee: Mark Shelton
>Priority: Minor
>
> In Kafka Streams, the StateRestoreCallback interface provides only a single 
> method "restore(byte[] key, byte[] value)" that is called for every key-value 
> pair to be restored. 
> It would be nice to have "beginRestore" and "endRestore" methods as part of 
> StateRestoreCallback.
> Kafka Streams would call "beginRestore" before restoring any keys, and would 
> call "endRestore" when it determines that it is done. This allows an 
> implementation, for example, to report on the number of keys restored and 
> perform a commit after the last key was restored. Other uses are conceivable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2133: KAFKA-4355: Skip topics that have no partitions

2016-11-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2133


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-93: Improve invalid timestamp handling in Kafka Streams

2016-11-22 Thread Guozhang Wang
Regarding the "compatibility" section, I would suggest being a bit more
specific about why it is a breaking change. For Streams, it could mean
different things:

1. Users need a code change when switching the library dependency to the new
version; otherwise it won't compile (I think this is the case for this KIP).
2. Users need a code change when switching the library dependency to the new
version; otherwise a runtime exception will be thrown.
3. Existing application state as well as internal topics need to be wiped
and the program needs to restart from zero.


Guozhang

On Fri, Nov 18, 2016 at 12:27 PM, Matthias J. Sax 
wrote:

> Hi all,
>
> I want to start a discussion about KIP-93:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 93%3A+Improve+invalid+timestamp+handling+in+Kafka+Streams
>
> Looking forward to your feedback.
>
>
> -Matthias
>
>


-- 
-- Guozhang


[jira] [Commented] (KAFKA-4355) StreamThread intermittently dies with "Topic not found during partition assignment" when broker restarted

2016-11-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687421#comment-15687421
 ] 

Guozhang Wang commented on KAFKA-4355:
--

Resolving the ticket as PR 2133 is merged. [~mihbor] please feel free to 
re-open if this issue happens again in trunk.

> StreamThread intermittently dies with "Topic not found during partition 
> assignment" when broker restarted
> -
>
> Key: KAFKA-4355
> URL: https://issues.apache.org/jira/browse/KAFKA-4355
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.0.0
> Environment: kafka 0.10.0.0
> kafka 0.10.1.0
> uname -a
> Linux lp02485 4.4.0-34-generic #53~14.04.1-Ubuntu SMP Wed Jul 27 16:56:40 UTC 
> 2016 x86_64 x86_64 x86_64 GNU/Linux
> java -version
> java version "1.8.0_92"
> Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
>Reporter: Michal Borowiecki
>Assignee: Eno Thereska
>  Labels: architecture
> Fix For: 0.10.2.0
>
>
> When (a) starting kafka streams app before the broker or
> (b) restarting the broker while the streams app is running:
> the stream thread intermittently dies with "Topic not found during partition 
> assignment" StreamsException.
> This happens about one in every 5 to 10 times.
> Stack trace:
> {noformat}
> Exception in thread "StreamThread-2" 
> org.apache.kafka.streams.errors.StreamsException: Topic not found during 
> partition assignment: scheduler
>   at 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper.maxNumPartitions(DefaultPartitionGrouper.java:81)
>   at 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper.partitionGroups(DefaultPartitionGrouper.java:55)
>   at 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:370)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:313)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:467)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1000(AbstractCoordinator.java:88)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:419)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:395)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:742)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:722)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:479)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:316)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:256)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:308)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> {noformat}
> Our app has 2 streams in it, consuming from 2 different topics.
> Sometimes the exception happens on both stream threads. Sometimes only on one 
> of the stream threads.
> The exception is preceded by:
> {noformat}
> [2016-10-28 16:17:55,239] INFO [StreamThread-2] (Re-)joining group 
> 

[jira] [Resolved] (KAFKA-4355) StreamThread intermittently dies with "Topic not found during partition assignment" when broker restarted

2016-11-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4355.
--
   Resolution: Fixed
Fix Version/s: 0.10.2.0

> StreamThread intermittently dies with "Topic not found during partition 
> assignment" when broker restarted
> -
>
> Key: KAFKA-4355
> URL: https://issues.apache.org/jira/browse/KAFKA-4355
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.0.0
> Environment: kafka 0.10.0.0
> kafka 0.10.1.0
> uname -a
> Linux lp02485 4.4.0-34-generic #53~14.04.1-Ubuntu SMP Wed Jul 27 16:56:40 UTC 
> 2016 x86_64 x86_64 x86_64 GNU/Linux
> java -version
> java version "1.8.0_92"
> Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
>Reporter: Michal Borowiecki
>Assignee: Eno Thereska
>  Labels: architecture
> Fix For: 0.10.2.0
>
>
> When (a) starting kafka streams app before the broker or
> (b) restarting the broker while the streams app is running:
> the stream thread intermittently dies with "Topic not found during partition 
> assignment" StreamsException.
> This happens about one in every 5 to 10 times.
> Stack trace:
> {noformat}
> Exception in thread "StreamThread-2" 
> org.apache.kafka.streams.errors.StreamsException: Topic not found during 
> partition assignment: scheduler
>   at 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper.maxNumPartitions(DefaultPartitionGrouper.java:81)
>   at 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper.partitionGroups(DefaultPartitionGrouper.java:55)
>   at 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:370)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:313)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:467)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1000(AbstractCoordinator.java:88)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:419)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:395)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:742)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:722)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:479)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:316)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:256)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:308)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> {noformat}
> Our app has 2 streams in it, consuming from 2 different topics.
> Sometimes the exception happens on both stream threads. Sometimes only on one 
> of the stream threads.
> The exception is preceded by:
> {noformat}
> [2016-10-28 16:17:55,239] INFO [StreamThread-2] (Re-)joining group 
> pool-scheduler 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2016-10-28 

[jira] [Commented] (KAFKA-4355) StreamThread intermittently dies with "Topic not found during partition assignment" when broker restarted

2016-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687416#comment-15687416
 ] 

ASF GitHub Bot commented on KAFKA-4355:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2133


> StreamThread intermittently dies with "Topic not found during partition 
> assignment" when broker restarted
> -
>
> Key: KAFKA-4355
> URL: https://issues.apache.org/jira/browse/KAFKA-4355
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.0.0
> Environment: kafka 0.10.0.0
> kafka 0.10.1.0
> uname -a
> Linux lp02485 4.4.0-34-generic #53~14.04.1-Ubuntu SMP Wed Jul 27 16:56:40 UTC 
> 2016 x86_64 x86_64 x86_64 GNU/Linux
> java -version
> java version "1.8.0_92"
> Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
>Reporter: Michal Borowiecki
>Assignee: Eno Thereska
>  Labels: architecture
>
> When (a) starting kafka streams app before the broker or
> (b) restarting the broker while the streams app is running:
> the stream thread intermittently dies with "Topic not found during partition 
> assignment" StreamsException.
> This happens about one in every 5 to 10 times.
> Stack trace:
> {noformat}
> Exception in thread "StreamThread-2" 
> org.apache.kafka.streams.errors.StreamsException: Topic not found during 
> partition assignment: scheduler
>   at 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper.maxNumPartitions(DefaultPartitionGrouper.java:81)
>   at 
> org.apache.kafka.streams.processor.DefaultPartitionGrouper.partitionGroups(DefaultPartitionGrouper.java:55)
>   at 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:370)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:313)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:467)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1000(AbstractCoordinator.java:88)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:419)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:395)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:742)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:722)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:479)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:316)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:256)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:308)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> {noformat}
> Our app has 2 streams in it, consuming from 2 different topics.
> Sometimes the exception happens on both stream threads. Sometimes only on one 
> of the stream threads.
> The exception is preceded by:
> {noformat}
> [2016-10-28 16:17:55,239] INFO [StreamThread-2] (Re-)joining group 
> pool-scheduler 
> 

[jira] [Commented] (KAFKA-4432) ProducerPerformance.java : Add support to supply custom message payloads.

2016-11-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687106#comment-15687106
 ] 

ASF GitHub Bot commented on KAFKA-4432:
---

GitHub user SandeshKarkera opened a pull request:

https://github.com/apache/kafka/pull/2158

KAFKA-4432: Added support to supply custom message payloads to 
perf-producer script.

The current implementation of ProducerPerformance creates a static payload. 
This is not very useful for testing compression or when you want to test with 
production/custom payloads. So, we decided to add support for providing a 
payload file as an input to the producer perf test script.

We made the following changes:
1. Added support to provide a payload file which can have the list of 
payloads that you actually want to send.
2. Moved payload generation inside the send loop for cases when payload 
file is provided.

Following are the changes to how producer-performance is invoked:
1. You must provide "--record-size" or "--payload-file" but not both. This 
is because record size cannot be guaranteed when you are using custom payloads.
  e.g. ./kafka-producer-perf-test.sh --topic test_topic --num-records 
10 --producer-props bootstrap.servers=127.0.0.1:9092 acks=0 
buffer.memory=33554432 compression.type=gzip batch.size=10240 linger.ms=10 
--throughput -1 --payload-file ./test_payloads --payload-delimiter ,
2. Earlier, "--record-size" was a required config; now you must provide 
exactly one of "--record-size" or "--payload-file". Providing both will result 
in an error.
3. Support for an additional parameter "--payload-delimiter" has been added, 
which defaults to "\n".
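
A minimal sketch of the payload selection described above (standalone and illustrative, not the PR's exact code):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class PayloadFileSketch {
    public static void main(String[] args) throws IOException {
        String delimiter = "\n"; // --payload-delimiter, defaulting to "\n"
        String contents = new String(
            Files.readAllBytes(Paths.get("./test_payloads")), StandardCharsets.UTF_8);
        List<byte[]> payloads = new ArrayList<>();
        for (String s : contents.split(delimiter)) {
            payloads.add(s.getBytes(StandardCharsets.UTF_8));
        }
        // Inside the send loop: pick a random payload per record, so record
        // sizes vary and compression behaves more realistically.
        byte[] payload = payloads.get(
            ThreadLocalRandom.current().nextInt(payloads.size()));
        System.out.println("selected a payload of " + payload.length + " bytes");
    }
}
```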

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SandeshKarkera/kafka PerfProducerChanges

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2158


commit af6c6d910ae3915c769ebac326ade329a724cda4
Author: Sandesh K 
Date:   2016-11-22T11:08:03Z

KAFKA-4432: Added support to supply custom message payloads

commit df278060a04fa844f150f30ae534a310c47c0eb3
Author: Sandesh K 
Date:   2016-11-22T15:53:52Z

KAFKA-4432: Fixed checkstyle issues




> ProducerPerformance.java : Add support to supply custom message payloads.
> -
>
> Key: KAFKA-4432
> URL: https://issues.apache.org/jira/browse/KAFKA-4432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sandesh Karkera
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> Currently, kafka-producer-perf-test.sh does not support supplying custom 
> payloads to read messages from. The payload generated is a static payload, 
> which is not very useful when testing features like compression. 
> We can improve ProducerPerformance.java to accept "-payload-file" and 
> "-payload-delimiter" (optional) parameters instead of "-record-size". This 
> will enable the user to run performance tests using custom payloads. 
> Before every send, we'll pick a random payload from the payload-file 
> provided. 
> -payload-file: Will consist of messages delimited by "-payload-delimiter". 
> -payload-delimiter: Takes a default value of "\n" if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4432) ProducerPerformance.java : Add support to supply custom message payloads.

2016-11-22 Thread Sandesh Karkera (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandesh Karkera updated KAFKA-4432:
---
Status: Patch Available  (was: Open)

PR raised:
KAFKA-4432: Added support to supply custom message payloads to perf-producer 
script. #2158

> ProducerPerformance.java : Add support to supply custom message payloads.
> -
>
> Key: KAFKA-4432
> URL: https://issues.apache.org/jira/browse/KAFKA-4432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sandesh Karkera
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> Currently, kafka-producer-perf-test.sh does not support supplying custom 
> payloads to read messages from. The payload generated is a static payload, 
> which is not very useful when testing features like compression. 
> We can improve ProducerPerformance.java to accept "-payload-file" and 
> "-payload-delimiter" (optional) parameters instead of "-record-size". This 
> will enable the user to run performance tests using custom payloads. 
> Before every send, we'll pick a random payload from the payload-file 
> provided. 
> -payload-file: Will consist of messages delimited by "-payload-delimiter". 
> -payload-delimiter: Takes a default value of "\n" if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-87 - Add Compaction Tombstone Flag

2016-11-22 Thread Michael Pearce
Hi Mayuresh,

LGTM. I've just made one small adjustment, updating the wire protocol to show 
the magic byte bump.

Do we think we're good to put this to a vote? Are there any other bits needing 
discussion?

Cheers
Mike

On 21/11/2016, 18:26, "Mayuresh Gharat"  wrote:

Hi Michael,

I have updated the migration section of the KIP. Can you please take a look?

Thanks,

Mayuresh

On Fri, Nov 18, 2016 at 9:07 AM, Mayuresh Gharat  wrote:

> Hi Michael,
>
> That whilst sending a tombstone and a non-null value, the consumer can
> expect to receive the non-null message only in step (3), is this correct?
> ---> I do agree with you here.
>
> Becket, Ismael : can you guys review the migration plan listed above using
> magic byte?
>
> Thanks,
>
> Mayuresh
>
> On Fri, Nov 18, 2016 at 8:58 AM, Michael Pearce 
> wrote:
>
>> Many thanks for this Mayuresh. I don't have any objections.
>>
>> I assume we should state:
>>
>> That whilst sending a tombstone and a non-null value, the consumer can
>> expect to receive the non-null message only in step (3), is this correct?
>>
>> Cheers
>> Mike
>>
>>
>>
>> Sent using OWA for iPhone
>> 
>> From: Mayuresh Gharat 
>> Sent: Thursday, November 17, 2016 5:18:41 PM
>> To: dev@kafka.apache.org
>> Subject: Re: [DISCUSS] KIP-87 - Add Compaction Tombstone Flag
>>
>> Hi Ismael,
>>
>> Thanks for the explanation.
>> Especially I like the part wherein you mentioned we can get rid of the
>> older null value support for log compaction later on, here:
>> We can't change semantics of the message format without having a long
>> transition period. And we can't rely
>> on people reading documentation or acting on a warning for something so
>> fundamental. As such, my take is that we need to bump the magic byte. The
>> good news is
>> that we don't have to support all versions forever. We have said that we
>> will support direct upgrades for 2 years. That means that message format
>> version n could, in theory, be removed 2 years after it's introduced.
>>
>> Just a heads up, I would like to mention that even without bumping the
>> magic byte, we will *NOT* lose zero copy, as client (x+1) in my explanation
>> above will internally convert a null value to have the tombstone bit set,
>> and a set tombstone bit to have a null value, automatically; and by the
>> time we move to version (x+2), the clients would have upgraded. Obviously,
>> if we support a request from consumer (x) we will lose zero copy, but that
>> is the same case with the magic byte.
>>
>> But if the magic byte bump makes the transition easier for the above
>> reasons that you explained, I am OK with it, since we are going to meet the
>> end goal down the road :)
>>
>> On a side note, can we update the doc here on the magic byte to say that
>> "*it should be bumped whenever the message format is changed or the
>> interpretation of the message format (including usage of the reserved bits)
>> is changed*".
>>
>>
>> Hi Michael,
>>
>> Here is the update plan that we discussed offline yesterday :
>>
>> Currently the magic-byte which corresponds to the 
"message.format.version"
>> is set to 1.
>>
>> 1) On broker it will be set to 1 initially.
>>
>> 2) When a producer client sends a message with magic-byte = 2, since the
>> broker is on magic-byte = 1, we will down-convert it, which means if the
>> tombstone bit is set, the value will be set to null. A consumer
>> understanding magic-byte = 1 will still work with this. A consumer working
>> with magic-byte = 2 will also be able to understand this, since it
>> understands the tombstone.
>> Now there is still the question of supporting a non-tombstone and null
>> value from a producer client with magic-byte = 2.* (I am not sure if we
>> should support this. Ismael/Becket can comment here)*
>>
>> 3) When almost all the clients have upgraded, the message.format.version
>> on the broker can be changed to 2, wherein the down-conversion in the
>> above step will not happen. If at this point we get a consumer request
>> from an older consumer, we might have to down-convert, wherein we lose
>> zero copy, but these cases should be rare.
>>
>> Becket, can you review this plan and add more details if I have
>> missed or gotten something wrong, before we put it on the KIP.
>>
>> Thanks,
>>
>> Mayuresh
>>
>> On Wed, Nov 16, 2016 at 11:07 PM, Michael Pearce 
>> wrote:
>>
>> > Thanks guys, for discussing 

[GitHub] kafka pull request #2158: KAFKA-4432: Added support to supply custom message...

2016-11-22 Thread SandeshKarkera
GitHub user SandeshKarkera opened a pull request:

https://github.com/apache/kafka/pull/2158

KAFKA-4432: Added support to supply custom message payloads to 
perf-producer script.

The current implementation of ProducerPerformance creates a static payload. 
This is not very useful for testing compression or when you want to test with 
production/custom payloads. So, we decided to add support for providing a 
payload file as an input to the producer perf test script.

We made the following changes:
1. Added support to provide a payload file which can have the list of 
payloads that you actually want to send.
2. Moved payload generation inside the send loop for cases when payload 
file is provided.

Following are the changes to how producer-performance is invoked:
1. You must provide "--record-size" or "--payload-file" but not both. This 
is because record size cannot be guaranteed when you are using custom payloads.
  e.g. ./kafka-producer-perf-test.sh --topic test_topic --num-records 
10 --producer-props bootstrap.servers=127.0.0.1:9092 acks=0 
buffer.memory=33554432 compression.type=gzip batch.size=10240 linger.ms=10 
--throughput -1 --payload-file ./test_payloads --payload-delimiter ,
2. Earlier, "--record-size" was a required config; now you must provide 
exactly one of "--record-size" or "--payload-file". Providing both will result 
in an error.
3. Support for an additional parameter "--payload-delimiter" has been added, 
which defaults to "\n".

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SandeshKarkera/kafka PerfProducerChanges

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2158


commit af6c6d910ae3915c769ebac326ade329a724cda4
Author: Sandesh K 
Date:   2016-11-22T11:08:03Z

KAFKA-4432: Added support to supply custom message payloads

commit df278060a04fa844f150f30ae534a310c47c0eb3
Author: Sandesh K 
Date:   2016-11-22T15:53:52Z

KAFKA-4432: Fixed checkstyle issues




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1704

2016-11-22 Thread Apache Jenkins Server
See 

Changes:

[ismael] MINOR: Typo in KafkaConsumer javadoc

--
[...truncated 12141 lines...]
org.apache.kafka.common.record.RecordTest > testChecksum[190] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[190] STARTED

org.apache.kafka.common.record.RecordTest > testEquality[190] PASSED

org.apache.kafka.common.record.RecordTest > testFields[190] STARTED

org.apache.kafka.common.record.RecordTest > testFields[190] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[191] STARTED

org.apache.kafka.common.record.RecordTest > testChecksum[191] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[191] STARTED

org.apache.kafka.common.record.RecordTest > testEquality[191] PASSED

org.apache.kafka.common.record.RecordTest > testFields[191] STARTED

org.apache.kafka.common.record.RecordTest > testFields[191] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[0] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[0] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[1] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[1] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[2] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[2] PASSED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[3] STARTED

org.apache.kafka.common.record.KafkaLZ4Test > testKafkaLZ4[3] PASSED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testSerializationRoundtrip STARTED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testSerializationRoundtrip PASSED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testTopiPartitionSerializationCompatibility STARTED

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testTopiPartitionSerializationCompatibility PASSED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdLow STARTED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdLow PASSED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdHigh 
STARTED

org.apache.kafka.common.protocol.ApiKeysTest > testForIdWithInvalidIdHigh PASSED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionName STARTED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionName PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionDefault STARTED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionDefault PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueExceptions STARTED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueExceptions PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionInheritance 
STARTED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionInheritance PASSED

org.apache.kafka.common.protocol.ErrorsTest > testNoneException STARTED

org.apache.kafka.common.protocol.ErrorsTest > testNoneException PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueErrorCodes STARTED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueErrorCodes PASSED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionsAreNotGeneric 
STARTED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionsAreNotGeneric PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testToString 
STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testToString 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadStringSizeTooLarge STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadStringSizeTooLarge PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testNullableDefault STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testNullableDefault PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeStringSize STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeStringSize PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadArraySizeTooLarge STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadArraySizeTooLarge PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeBytesSize STARTED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testReadNegativeBytesSize PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 

[jira] [Commented] (KAFKA-4432) ProducerPerformance.java : Add support to supply custom message payloads.

2016-11-22 Thread Sandesh Karkera (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15686694#comment-15686694
 ] 

Sandesh Karkera commented on KAFKA-4432:


Note: I am working on this but don't have the rights to assign this to myself.

> ProducerPerformance.java : Add support to supply custom message payloads.
> -
>
> Key: KAFKA-4432
> URL: https://issues.apache.org/jira/browse/KAFKA-4432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sandesh Karkera
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> Currently, kafka-producer-perf-test.sh does not support supplying custom 
> payloads to read messages from. The generated payload is static, which is 
> not very useful when testing features like compression. 
> We can improve ProducerPerformance.java to accept "-payload-file" and 
> "-payload-delimiter" (optional) parameters instead of "-record-size". This 
> will enable the user to run performance tests using custom payloads. 
> Before every send, we'll pick a random payload from the payload file 
> provided. 
> -payload-file: Will consist of messages delimited by "-payload-delimiter". 
> -payload-delimiter: Takes a default value of "\n" if not specified.
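
For illustration only, a minimal Java sketch of the proposed behavior: load the
payload file once, split it on the delimiter, and pick a random payload before
every send. The class and method names (PayloadFileSketch, readPayloads,
nextPayload) are hypothetical and not part of ProducerPerformance.java.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Random;
    import java.util.regex.Pattern;

    // Hypothetical helper sketching the proposed payload-file support.
    public class PayloadFileSketch {

        // Read the whole file and split it on the (literal) delimiter.
        static List<String> readPayloads(String payloadFile, String delimiter) throws IOException {
            String content = new String(Files.readAllBytes(Paths.get(payloadFile)),
                    StandardCharsets.UTF_8);
            return Arrays.asList(content.split(Pattern.quote(delimiter)));
        }

        // Pick a random payload, so compression sees varied data on each send.
        static byte[] nextPayload(List<String> payloads, Random random) {
            return payloads.get(random.nextInt(payloads.size())).getBytes(StandardCharsets.UTF_8);
        }

        public static void main(String[] args) throws IOException {
            // "payloads.txt" is a placeholder; delimiter defaults to "\n" per the proposal.
            List<String> payloads = readPayloads("payloads.txt", "\n");
            byte[] value = nextPayload(payloads, new Random());
            System.out.println("next payload has " + value.length + " bytes");
        }
    }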



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4432) ProducerPerformance.java : Add support to supply custom message payloads.

2016-11-22 Thread Sandesh Karkera (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandesh Karkera updated KAFKA-4432:
---
Fix Version/s: (was: 0.10.0.2)
   0.10.2.0

> ProducerPerformance.java : Add support to supply custom message payloads.
> -
>
> Key: KAFKA-4432
> URL: https://issues.apache.org/jira/browse/KAFKA-4432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sandesh Karkera
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> Currently, kafka-producer-perf-test.sh does not support supplying custom 
> payloads to read messages from. The generated payload is static, which is 
> not very useful when testing features like compression. 
> We can improve ProducerPerformance.java to accept "-payload-file" and 
> "-payload-delimiter" (optional) parameters instead of "-record-size". This 
> will enable the user to run performance tests using custom payloads. 
> Before every send, we'll pick a random payload from the payload file 
> provided. 
> -payload-file: Will consist of messages delimited by "-payload-delimiter". 
> -payload-delimiter: Takes a default value of "\n" if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3175) topic not accessible after deletion even when delete.topic.enable is disabled

2016-11-22 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15686567#comment-15686567
 ] 

Ismael Juma commented on KAFKA-3175:


I think this was only merged to trunk, so I updated the fix version to 
0.10.2.0. Please let me know if I missed something.

> topic not accessible after deletion even when delete.topic.enable is disabled
> -
>
> Key: KAFKA-3175
> URL: https://issues.apache.org/jira/browse/KAFKA-3175
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Mayuresh Gharat
> Fix For: 0.10.2.0
>
>
> This can be reproduced with the following steps.
> 1. start ZK and 1 broker (with default delete.topic.enable=false)
> 2. create a topic test
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test 
> --partitions 1 --replication-factor 1
> 3. delete topic test
> bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
> 4. restart the broker
> Now topic test still shows up during topic description.
> bin/kafka-topics.sh --zookeeper localhost:2181 --describe
> Topic:test  PartitionCount:1  ReplicationFactor:1  Configs:
>   Topic: test  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
> However, one can't produce to this topic any more.
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> [2016-01-29 17:55:24,527] WARN Error while fetching metadata with correlation 
> id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2016-01-29 17:55:24,725] WARN Error while fetching metadata with correlation 
> id 1 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2016-01-29 17:55:24,828] WARN Error while fetching metadata with correlation 
> id 2 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3175) topic not accessible after deletion even when delete.topic.enable is disabled

2016-11-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3175:
---
Fix Version/s: (was: 0.10.1.1)
   0.10.2.0

> topic not accessible after deletion even when delete.topic.enable is disabled
> -
>
> Key: KAFKA-3175
> URL: https://issues.apache.org/jira/browse/KAFKA-3175
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Mayuresh Gharat
> Fix For: 0.10.2.0
>
>
> This can be reproduced with the following steps.
> 1. start ZK and 1 broker (with default delete.topic.enable=false)
> 2. create a topic test
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test 
> --partitions 1 --replication-factor 1
> 3. delete topic test
> bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
> 4. restart the broker
> Now topic test still shows up during topic description.
> bin/kafka-topics.sh --zookeeper localhost:2181 --describe
> Topic:test  PartitionCount:1  ReplicationFactor:1  Configs:
>   Topic: test  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
> However, one can't produce to this topic any more.
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> [2016-01-29 17:55:24,527] WARN Error while fetching metadata with correlation 
> id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2016-01-29 17:55:24,725] WARN Error while fetching metadata with correlation 
> id 1 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2016-01-29 17:55:24,828] WARN Error while fetching metadata with correlation 
> id 2 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2157: Typo in Javadoc

2016-11-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2157


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4373) Kafka Consumer API jumping offsets

2016-11-22 Thread Conor Hughes (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15686394#comment-15686394
 ] 

Conor Hughes commented on KAFKA-4373:
-

Thanks for such a quick reply!

After you mentioned the log retention policy, I went back through my Kafka 
configurations and noticed log.retention.hours was set back to the default of 7 
days (168 hours).

The trace files I was using were from over a month ago, so messages were being 
deleted every 5 minutes as they were older than 7 days. (Note: I'm using a 
custom timestamp extractor.)
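
For reference, the broker settings involved, sketched in server.properties 
form. The values shown are the defaults discussed above; verify them against 
your broker version.

    # Broker retention default: 7 days (168 hours)
    log.retention.hours=168
    # How often the broker checks for log segments eligible for deletion
    # (default 300000 ms, matching the 5-minute deletion cadence observed above)
    log.retention.check.interval.ms=300000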

> Kafka Consumer API jumping offsets
> --
>
> Key: KAFKA-4373
> URL: https://issues.apache.org/jira/browse/KAFKA-4373
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Srinivasan Venkatraman
>
> Hi,
> I am using Kafka version 0.10.0.1 and the Java consumer API to consume messages 
> from a topic. We are using a single-node Kafka and ZooKeeper. It is sometimes 
> observed that the consumer loses a bulk of messages. We are unable to find 
> the exact reason or to replicate the issue.
> The scenario is:
> Consumer polls the topic.
> Fetches the messages and gives them to a thread pool to handle.
> Waits for the threads to return and then commits the offsets.
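
For illustration, a minimal sketch of the consume-process-commit loop described 
above, written against the 0.10.x consumer API. The bootstrap address, group 
id, topic name, and pool size are hypothetical; this is not the reporter's 
actual code.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollProcessCommitLoop {

        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed single-node setup
            props.put("group.id", "demo-group");              // hypothetical group id
            props.put("enable.auto.commit", "false");         // commit manually, as described
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            ExecutorService pool = Executors.newFixedThreadPool(4);
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("demo-topic")); // hypothetical topic
                while (true) {
                    // 1. Poll the topic (0.10.x long-timeout signature).
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    // 2. Hand the fetched messages to a thread pool.
                    List<Callable<Void>> tasks = new ArrayList<>();
                    for (ConsumerRecord<String, String> record : records) {
                        tasks.add(() -> { handle(record); return null; });
                    }
                    // 3. Wait for all handler threads to return ...
                    pool.invokeAll(tasks);
                    // 4. ... then commit the offsets returned by the last poll.
                    consumer.commitSync();
                }
            }
        }

        private static void handle(ConsumerRecord<String, String> record) {
            // message-handling logic goes here
        }
    }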



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2157: Typo in Javadoc

2016-11-22 Thread astubbs
GitHub user astubbs opened a pull request:

https://github.com/apache/kafka/pull/2157

Typo in Javadoc



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/astubbs/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2157.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2157


commit a991106d1c3fe354d96da7bbaca6294a792d5bda
Author: Antony Stubbs 
Date:   2016-11-22T10:01:34Z

Typo in Javadoc




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4432) ProducerPerformance.java : Add support to supply custom message payloads.

2016-11-22 Thread Sandesh Karkera (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandesh Karkera updated KAFKA-4432:
---
Description: 
Currently, kafka-producer-perf-test.sh does not support supplying custom 
payloads to read messages from. The generated payload is static, which is not 
very useful when testing features like compression. 

We can improve ProducerPerformance.java to accept "-payload-file" and 
"-payload-delimiter" (optional) parameters instead of "-record-size". This will 
enable the user to run performance tests using custom payloads. 

Before every send, we'll pick a random payload from the payload file provided. 

-payload-file: Will consist of messages delimited by "--payload-delimiter". 
-payload-delimiter: Takes a default value of "\n" if not specified.

  was:
Currently, kafka-producer-perf-test.sh does not support supplying custom 
payloads to read messages from. The generated payload is static, which is not 
very useful when testing features like compression. 

We can improve ProducerPerformance.java to accept "--payload-file" and 
"--payload-delimiter" (optional) parameters instead of "--record-size". This 
will enable the user to run performance tests using custom payloads. 

Before every send, we'll pick a random payload from the payload file provided. 

--payload-file: Will consist of messages delimited by "--payload-delimiter". 
--payload-delimiter: Takes a default value of "\n" if not specified.


> ProducerPerformance.java : Add support to supply custom message payloads.
> -
>
> Key: KAFKA-4432
> URL: https://issues.apache.org/jira/browse/KAFKA-4432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sandesh Karkera
>Priority: Minor
> Fix For: 0.10.0.2
>
>
> Currently, kafka-producer-perf-test.sh does not support supplying custom 
> payloads to read messages from. The generated payload is static, which is 
> not very useful when testing features like compression. 
> We can improve ProducerPerformance.java to accept "-payload-file" and 
> "-payload-delimiter" (optional) parameters instead of "-record-size". This 
> will enable the user to run performance tests using custom payloads. 
> Before every send, we'll pick a random payload from the payload file 
> provided. 
> -payload-file: Will consist of messages delimited by "--payload-delimiter". 
> -payload-delimiter: Takes a default value of "\n" if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4432) ProducerPerformance.java : Add support to supply custom message payloads.

2016-11-22 Thread Sandesh Karkera (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandesh Karkera updated KAFKA-4432:
---
Description: 
Currently, kafka-producer-perf-test.sh does not support supplying custom 
payloads to read messages from. The generated payload is static, which is not 
very useful when testing features like compression. 

We can improve ProducerPerformance.java to accept "-payload-file" and 
"-payload-delimiter" (optional) parameters instead of "-record-size". This will 
enable the user to run performance tests using custom payloads. 

Before every send, we'll pick a random payload from the payload file provided. 

-payload-file: Will consist of messages delimited by "-payload-delimiter". 
-payload-delimiter: Takes a default value of "\n" if not specified.

  was:
Currently, kafka-producer-perf-test.sh does not support supplying custom 
payloads to read messages from. The generated payload is static, which is not 
very useful when testing features like compression. 

We can improve ProducerPerformance.java to accept "-payload-file" and 
"-payload-delimiter" (optional) parameters instead of "-record-size". This will 
enable the user to run performance tests using custom payloads. 

Before every send, we'll pick a random payload from the payload file provided. 

-payload-file: Will consist of messages delimited by "--payload-delimiter". 
-payload-delimiter: Takes a default value of "\n" if not specified.


> ProducerPerformance.java : Add support to supply custom message payloads.
> -
>
> Key: KAFKA-4432
> URL: https://issues.apache.org/jira/browse/KAFKA-4432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sandesh Karkera
>Priority: Minor
> Fix For: 0.10.0.2
>
>
> Currently, kafka-producer-perf-test.sh does not support supplying custom 
> payloads to read messages from. The generated payload is static, which is 
> not very useful when testing features like compression. 
> We can improve ProducerPerformance.java to accept "-payload-file" and 
> "-payload-delimiter" (optional) parameters instead of "-record-size". This 
> will enable the user to run performance tests using custom payloads. 
> Before every send, we'll pick a random payload from the payload file 
> provided. 
> -payload-file: Will consist of messages delimited by "-payload-delimiter". 
> -payload-delimiter: Takes a default value of "\n" if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4432) ProducerPerformance.java : Add support to supply custom message payloads.

2016-11-22 Thread Sandesh Karkera (JIRA)
Sandesh Karkera created KAFKA-4432:
--

 Summary: ProducerPerformance.java : Add support to supply custom 
message payloads.
 Key: KAFKA-4432
 URL: https://issues.apache.org/jira/browse/KAFKA-4432
 Project: Kafka
  Issue Type: Improvement
Reporter: Sandesh Karkera
Priority: Minor
 Fix For: 0.10.0.2


Currently, kafka-producer-perf-test.sh does not support supplying custom 
payloads to read messages from. The generated payload is static, which is not 
very useful when testing features like compression. 

We can improve ProducerPerformance.java to accept "--payload-file" and 
"--payload-delimiter" (optional) parameters instead of "--record-size". This 
will enable the user to run performance tests using custom payloads. 

Before every send, we'll pick a random payload from the payload file provided. 

--payload-file: Will consist of messages delimited by "--payload-delimiter". 
--payload-delimiter: Takes a default value of "\n" if not specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2154: HOTFIX: Increased wait time

2016-11-22 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/2154


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---