Build failed in Jenkins: kafka-trunk-jdk8 #1390

2017-03-30 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update possible errors in OffsetFetchResponse

[wangguoz] KAFKA-4980: testReprocessingFromScratch unit test failure

[wangguoz] MINOR: reduce amount of verbose printing

[ismael] MINOR: Ensure streaming iterator is closed by Fetcher

[jason] MINOR: Vagrant provisioning fixes

[wangguoz] KAFKA-4791: unable to add state store with regex matched topics

[ismael] KAFKA-4689; Disable system tests for consumer hard failures

[wangguoz] MINOR: Increase max.poll time for streams consumers

--
[...truncated 754.08 KB...]

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformAllQueriesWithCachingDisabled STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformAllQueriesWithCachingDisabled PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformRangeQueriesWithCachingDisabled STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformRangeQueriesWithCachingDisabled PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.KeyValueTest > shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.KeyValueTest > shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.KafkaStreamsTest > testIllegalMetricsConfig STARTED

org.apache.kafka.streams.KafkaStreamsTest > testIllegalMetricsConfig PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldNotGetAllTasksWhenNotRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldNotGetAllTasksWhenNotRunning 
PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
testInitializesAndDestroysMetricsReporters STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
testInitializesAndDestroysMetricsReporters PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndPartitionerWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndPartitionerWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testLegalMetricsConfig STARTED

org.apache.kafka.streams.KafkaStreamsTest > testLegalMetricsConfig PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndSerializerWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndSerializerWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldReturnFalseOnCloseWhenThreadsHaventTerminated STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldReturnFalseOnCloseWhenThreadsHaventTerminated PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetAllTasksWithStoreWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetAllTasksWithStoreWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.Ka

[GitHub] kafka pull request #2774: MINOR: StreamThread should catch InvalidTopicExcep...

2017-03-30 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2774

MINOR: StreamThread should catch InvalidTopicException



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka minor-fix-reset-0102

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2774.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2774


commit 3aa634186335c6143de458daef879983678a1329
Author: Matthias J. Sax 
Date:   2017-03-28T01:01:55Z

MINOR: StreamThread should catch InvalidTopicException




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4987) Topic creation allows invalid config values on running brokers

2017-03-30 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15950237#comment-15950237
 ] 

huxi commented on KAFKA-4987:
-

Did you use a newer client version talking to an older broker version?

> Topic creation allows invalid config values on running brokers
> --
>
> Key: KAFKA-4987
> URL: https://issues.apache.org/jira/browse/KAFKA-4987
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1, 0.10.1.0
>Reporter: dan norwood
>
> we use KIP-4 capabilities to make a `CreateTopicsRequest` for our topics. One 
> of the configs we use is `cleanup.policy=compact, delete`. This was 
> inadvertently run against a cluster that does not support that policy. The 
> result was that the topic was created; however, on a subsequent broker bounce 
> the broker fails to start up:
> {code}
> [2017-03-23 00:00:44,837] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.config.ConfigException: Invalid value compact,delete 
> for configuration cleanup.policy: String must be one of: compact, delete
>   at 
> org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:827)
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:427)
>   at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:55)
>   at kafka.log.LogConfig.<init>(LogConfig.scala:56)
>   at kafka.log.LogConfig$.fromProps(LogConfig.scala:192)
>   at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:598)
>   at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:597)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:597)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:183)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
>   at kafka.Kafka$.main(Kafka.scala:67)
>   at kafka.Kafka.main(Kafka.scala)
> [2017-03-23 00:00:44,839] INFO shutting down (kafka.server.KafkaServer)
> [2017-03-23 00:00:44,844] INFO shut down completed (kafka.server.KafkaServer)
> {code}
> I believe that the broker should fail the request when given an invalid 
> config during topic creation
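
(For context, here is a minimal sketch, not from the ticket, of the validation that fires only at broker startup. The validator and message correspond to the {{ConfigDef$ValidString.ensureValid}} frame in the stack trace above; the wrapper class and main method are hypothetical scaffolding.)

{code}
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

public class CleanupPolicyValidation {
    public static void main(String[] args) {
        // On brokers where cleanup.policy is a single-valued enum, the combined
        // value is rejected by the same validator seen in the stack trace above.
        ConfigDef.ValidString validator = ConfigDef.ValidString.in("compact", "delete");
        try {
            validator.ensureValid("cleanup.policy", "compact,delete");
        } catch (ConfigException e) {
            // The exception surfaces only when LogConfig is constructed, i.e.
            // at broker (re)start, not when the CreateTopicsRequest is handled.
            System.out.println(e.getMessage());
        }
    }
}
{code}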



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Onur Karaman
It seems like there has generally been the assumption that the broker needs
to know about this delay, either from its own config or provided over the
wire by clients. Is this actually true?

One alternative I don't think was mentioned is to make this delay concept
completely client-side.

What I mean by this is that maybe we can just locally postpone, on the client,
any message consumption or callback triggering for the first N
seconds after a rebalance. In the meantime the client can still heartbeat and
get notified of rebalances during this delay.

I haven't really thought it through in detail. Please let me know if I'm
missing something.
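
(For concreteness, here is a minimal sketch of that client-side shape, using only the standard consumer pause/resume and rebalance-listener APIs. The bootstrap address, group, topic, and the 3-second delay are hypothetical placeholders; this is one possible illustration, not a worked-out proposal.)

{code}
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class ClientSideRebalanceDelay {
    public static void main(String[] args) {
        final long delayMs = 3000L;  // the "first N seconds" after a rebalance
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "delayed-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        final Set<TopicPartition> delayed = new HashSet<>();
        final long[] resumeAt = {Long.MAX_VALUE};

        consumer.subscribe(Collections.singletonList("my-topic"),
                new ConsumerRebalanceListener() {
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        delayed.removeAll(parts);
                    }
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Suppress delivery locally; poll() keeps heartbeating, so
                        // the member still sees any follow-up rebalance meanwhile.
                        consumer.pause(parts);
                        delayed.addAll(parts);
                        resumeAt[0] = System.currentTimeMillis() + delayMs;
                    }
                });

        while (true) {
            if (!delayed.isEmpty() && System.currentTimeMillis() >= resumeAt[0]) {
                consumer.resume(delayed);  // delay elapsed: deliver records again
                delayed.clear();
            }
            for (ConsumerRecord<String, String> record : consumer.poll(100))
                System.out.println(record.offset() + ": " + record.value());
        }
    }
}
{code}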

On Thu, Mar 30, 2017 at 6:03 PM, Dong Lin  wrote:

> +1 (non-binding)
>
> Thanks!
>
>
> On Thu, Mar 30, 2017 at 6:03 PM, Becket Qin  wrote:
>
> > +1 Thanks for the KIP!
> >
> > On Thu, Mar 30, 2017 at 12:55 PM, Jason Gustafson 
> > wrote:
> >
> > > +1 Thanks for the KIP!
> > >
> > > On Thu, Mar 30, 2017 at 12:51 PM, Guozhang Wang 
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Sorry about the previous email; Gmail seems to be collapsing them into a
> > > > single thread in my inbox.
> > > >
> > > > Guozhang
> > > >
> > > > On Thu, Mar 30, 2017 at 11:34 AM, Guozhang Wang 
> > > > wrote:
> > > >
> > > > > Damian, could you create a new thread for the voting process?
> > > > >
> > > > > Thanks!
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Thu, Mar 30, 2017 at 10:33 AM, Bill Bejeck 
> > > wrote:
> > > > >
> > > > >> +1(non-binding)
> > > > >>
> > > > >> On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska <
> > eno.there...@gmail.com
> > > >
> > > > >> wrote:
> > > > >>
> > > > >> > +1 (non binding)
> > > > >> >
> > > > >> > Thanks
> > > > >> > Eno
> > > > >> > > On 30 Mar 2017, at 18:01, Matthias J. Sax <
> > matth...@confluent.io>
> > > > >> wrote:
> > > > >> > >
> > > > >> > > +1
> > > > >> > >
> > > > >> > > On 3/30/17 3:46 AM, Damian Guy wrote:
> > > > >> > >> Hi All,
> > > > >> > >>
> > > > >> > >> I'd like to start the voting thread on KIP-134:
> > > > >> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-134%3A+Delay+initial+consumer+group+rebalance
> > > > >> > >>
> > > > >> > >> Thanks,
> > > > >> > >> Damian
> > > > >> > >>
> > > > >> > >
> > > > >> >
> > > > >> >
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-0.10.2-jdk7 #116

2017-03-30 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-4689; Disable system tests for consumer hard failures

[wangguoz] MINOR: Increase max.poll time for streams consumers

--
[...truncated 257.99 KB...]

kafka.api.PlaintextConsumerTest > testPerPartitionLagWithMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset STARTED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst STARTED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets STARTED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata STARTED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups STARTED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testListAllBrokerVersionInfo STARTED

kafka.api.AdminClientTest > testListAllBrokerVersionInfo PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.AdminClientTest > testGetConsumerGroupSummary STARTED

kafka.api.AdminClientTest > testGetConsumerGroupSummary PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero STARTED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer STARTED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testClose STARTED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush STARTED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition STARTED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

ka

Jenkins build is back to normal : kafka-trunk-jdk7 #2058

2017-03-30 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Dong Lin
+1 (non-binding)

Thanks!


On Thu, Mar 30, 2017 at 6:03 PM, Becket Qin  wrote:

> +1 Thanks for the KIP!
>
> On Thu, Mar 30, 2017 at 12:55 PM, Jason Gustafson 
> wrote:
>
> > +1 Thanks for the KIP!
> >
> > On Thu, Mar 30, 2017 at 12:51 PM, Guozhang Wang 
> > wrote:
> >
> > > +1
> > >
> > > Sorry about the previous email; Gmail seems to be collapsing them into a
> > > single thread in my inbox.
> > >
> > > Guozhang
> > >
> > > On Thu, Mar 30, 2017 at 11:34 AM, Guozhang Wang 
> > > wrote:
> > >
> > > > Damian, could you create a new thread for the voting process?
> > > >
> > > > Thanks!
> > > >
> > > > Guozhang
> > > >
> > > > On Thu, Mar 30, 2017 at 10:33 AM, Bill Bejeck 
> > wrote:
> > > >
> > > >> +1(non-binding)
> > > >>
> > > >> On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska <
> eno.there...@gmail.com
> > >
> > > >> wrote:
> > > >>
> > > >> > +1 (non binding)
> > > >> >
> > > >> > Thanks
> > > >> > Eno
> > > >> > > On 30 Mar 2017, at 18:01, Matthias J. Sax <
> matth...@confluent.io>
> > > >> wrote:
> > > >> > >
> > > >> > > +1
> > > >> > >
> > > >> > > On 3/30/17 3:46 AM, Damian Guy wrote:
> > > >> > >> Hi All,
> > > >> > >>
> > > >> > >> I'd like to start the voting thread on KIP-134:
> > > >> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-134%3A+Delay+initial+consumer+group+rebalance
> > > >> > >>
> > > >> > >> Thanks,
> > > >> > >> Damian
> > > >> > >>
> > > >> > >
> > > >> >
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Becket Qin
+1 Thanks for the KIP!

On Thu, Mar 30, 2017 at 12:55 PM, Jason Gustafson 
wrote:

> +1 Thanks for the KIP!
>
> On Thu, Mar 30, 2017 at 12:51 PM, Guozhang Wang 
> wrote:
>
> > +1
> >
> > Sorry about the previous email; Gmail seems to be collapsing them into a
> > single thread in my inbox.
> >
> > Guozhang
> >
> > On Thu, Mar 30, 2017 at 11:34 AM, Guozhang Wang 
> > wrote:
> >
> > > Damian, could you create a new thread for the voting process?
> > >
> > > Thanks!
> > >
> > > Guozhang
> > >
> > > On Thu, Mar 30, 2017 at 10:33 AM, Bill Bejeck 
> wrote:
> > >
> > >> +1(non-binding)
> > >>
> > >> On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska  >
> > >> wrote:
> > >>
> > >> > +1 (non binding)
> > >> >
> > >> > Thanks
> > >> > Eno
> > >> > > On 30 Mar 2017, at 18:01, Matthias J. Sax 
> > >> wrote:
> > >> > >
> > >> > > +1
> > >> > >
> > >> > > On 3/30/17 3:46 AM, Damian Guy wrote:
> > >> > >> Hi All,
> > >> > >>
> > >> > >> I'd like to start the voting thread on KIP-134:
> > >> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-134%3A+Delay+initial+consumer+group+rebalance
> > >> > >>
> > >> > >> Thanks,
> > >> > >> Damian
> > >> > >>
> > >> > >
> > >> >
> > >> >
> > >>
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #1389

2017-03-30 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Map `mkString` format updated to default java format

[ismael] KAFKA-4902; Utils#delete should correctly handle I/O errors and 
symlinks

[ismael] MINOR: Doc change related to ZK sasl configs

[ismael] MINOR: Fix typos in javadoc and code comments

[becket.qin] KAFKA-4973; Fix transient failure of

--
[...truncated 752.42 KB...]

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformAllQueriesWithCachingDisabled STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformAllQueriesWithCachingDisabled PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformRangeQueriesWithCachingDisabled STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformRangeQueriesWithCachingDisabled PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.KeyValueTest > shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.KeyValueTest > shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.KafkaStreamsTest > testIllegalMetricsConfig STARTED

org.apache.kafka.streams.KafkaStreamsTest > testIllegalMetricsConfig PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldNotGetAllTasksWhenNotRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldNotGetAllTasksWhenNotRunning 
PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
testInitializesAndDestroysMetricsReporters STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
testInitializesAndDestroysMetricsReporters PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndPartitionerWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndPartitionerWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testLegalMetricsConfig STARTED

org.apache.kafka.streams.KafkaStreamsTest > testLegalMetricsConfig PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndSerializerWhenNotRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldNotGetTaskWithKeyAndSerializerWhenNotRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldReturnFalseOnClo

[GitHub] kafka pull request #2756: KAFKA-4818: Implement transactional producer

2017-03-30 Thread mjsax
Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/2756


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4818) Implement transactional producer

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15950098#comment-15950098
 ] 

ASF GitHub Bot commented on KAFKA-4818:
---

Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/2756


> Implement transactional producer
> 
>
> Key: KAFKA-4818
> URL: https://issues.apache.org/jira/browse/KAFKA-4818
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> This covers the implementation of the transaction coordinator and the changes 
> to the producer and consumer to support transactions. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (KAFKA-4842) Streams integration tests occasionally fail with connection error

2017-03-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-4842:
--

Re-opening the issue and assigning it to [~mjsax]

> Streams integration tests occasionally fail with connection error
> -
>
> Key: KAFKA-4842
> URL: https://issues.apache.org/jira/browse/KAFKA-4842
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
> Fix For: 0.11.0.0
>
>
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[0] FAILED with java.lang.IllegalStateException: No entry found 
> for connection 0 (in ClusterConnectionStates.java, node state). This happens 
> locally. It happened to KStreamAggregationIntegrationTest.shouldReduce() 
> once too. 
> It is not clear this is streams-related. Also, it's hard to reproduce since it 
> doesn't happen all the time. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4842) Streams integration tests occasionally fail with connection error

2017-03-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-4842:


Assignee: Matthias J. Sax

> Streams integration tests occasionally fail with connection error
> -
>
> Key: KAFKA-4842
> URL: https://issues.apache.org/jira/browse/KAFKA-4842
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[0] FAILED with java.lang.IllegalStateException: No entry found 
> for connection 0 (in ClusterConnectionStates.java, node state). This happens 
> locally. It happened to KStreamAggregationIntegrationTest.shouldReduce() 
> once too. 
> It is not clear this is streams-related. Also, it's hard to reproduce since it 
> doesn't happen all the time. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk7 #2057

2017-03-30 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update possible errors in OffsetFetchResponse

--
[...truncated 900.84 KB...]
org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable 
STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable 
PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKTable PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic FAILED
org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
already exists.

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithZeroByteCache STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithZeroByteCache PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithNonZeroByteCache STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegionWithNonZeroByteCache PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate 

[jira] [Created] (KAFKA-4990) Add API stubs, config parameters, and request types

2017-03-30 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-4990:
--

 Summary: Add API stubs, config parameters, and request types
 Key: KAFKA-4990
 URL: https://issues.apache.org/jira/browse/KAFKA-4990
 Project: Kafka
  Issue Type: Sub-task
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4989) Improve assertions for consumer hard failure system tests to handle possible event loss

2017-03-30 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4989:
--

 Summary: Improve assertions for consumer hard failure system tests 
to handle possible event loss
 Key: KAFKA-4989
 URL: https://issues.apache.org/jira/browse/KAFKA-4989
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Jason Gustafson


In KAFKA-4689 we disabled the system tests which verify consumer semantics 
during hard failures. The problem is that we may miss events from the 
consumer when it is shut down with a {{kill -9}}. The test assertions 
probably need to be weakened to allow for this possibility before we can 
re-enable them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-120: Cleanup Kafka Streams builder API

2017-03-30 Thread Guozhang Wang
+1.


On Thu, Mar 30, 2017 at 1:18 AM, Damian Guy  wrote:

> Thanks Matthias.
>
> +1
>
> On Thu, 23 Mar 2017 at 22:40 Matthias J. Sax 
> wrote:
>
> > Hi,
> >
> > I would like to start the VOTE on KIP-120:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-120%3A+Cleanup+Kafka+Streams+builder+API
> >
> > If you have further comments, please reply to the DISCUSS thread.
> >
> > Thanks a lot!
> >
> >
> > -Matthias
> >
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #2770: MINOR: Increase max.poll time for streams consumer...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2770


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4689) OffsetValidationTest fails validation with "Current position greater than the total number of consumed records"

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15950018#comment-15950018
 ] 

ASF GitHub Bot commented on KAFKA-4689:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2771


> OffsetValidationTest fails validation with "Current position greater than the 
> total number of consumed records"
> ---
>
> Key: KAFKA-4689
> URL: https://issues.apache.org/jira/browse/KAFKA-4689
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, system tests
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>  Labels: system-test-failure
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> {quote}
> 
> test_id:
> kafkatest.tests.client.consumer_test.OffsetValidationTest.test_consumer_bounce.clean_shutdown=False.bounce_mode=all
> status: FAIL
> run time:   1 minute 49.834 seconds
> Current position greater than the total number of consumed records
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/client/consumer_test.py",
>  line 157, in test_consumer_bounce
> "Current position greater than the total number of consumed records"
> AssertionError: Current position greater than the total number of consumed 
> records
> {quote}
> See also 
> https://issues.apache.org/jira/browse/KAFKA-3513?focusedCommentId=15791790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15791790
>  which is another instance of this bug, which indicates the issue goes back 
> at least as far as 1/17/2017. Note that I don't think we've seen this in 
> 0.10.1 yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-4689) OffsetValidationTest fails validation with "Current position greater than the total number of consumed records"

2017-03-30 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4689.

   Resolution: Fixed
Fix Version/s: 0.10.2.1
   0.11.0.0

Issue resolved by pull request 2771
[https://github.com/apache/kafka/pull/2771]

> OffsetValidationTest fails validation with "Current position greater than the 
> total number of consumed records"
> ---
>
> Key: KAFKA-4689
> URL: https://issues.apache.org/jira/browse/KAFKA-4689
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, system tests
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>  Labels: system-test-failure
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> {quote}
> 
> test_id:
> kafkatest.tests.client.consumer_test.OffsetValidationTest.test_consumer_bounce.clean_shutdown=False.bounce_mode=all
> status: FAIL
> run time:   1 minute 49.834 seconds
> Current position greater than the total number of consumed records
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/client/consumer_test.py",
>  line 157, in test_consumer_bounce
> "Current position greater than the total number of consumed records"
> AssertionError: Current position greater than the total number of consumed 
> records
> {quote}
> See also 
> https://issues.apache.org/jira/browse/KAFKA-3513?focusedCommentId=15791790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15791790
>  which is another instance of this bug, which indicates the issue goes back 
> at least as far as 1/17/2017. Note that I don't think we've seen this in 
> 0.10.1 yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2771: KAFKA-4689: Disable system tests for consumer hard...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2771


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4791) Kafka Streams - unable to add state stores when using wildcard topics on the source

2017-03-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4791:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.1
   0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2618
[https://github.com/apache/kafka/pull/2618]

> Kafka Streams - unable to add state stores when using wildcard topics on the 
> source
> ---
>
> Key: KAFKA-4791
> URL: https://issues.apache.org/jira/browse/KAFKA-4791
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1, 0.10.2.0
> Environment: Java 8
>Reporter: Bart Vercammen
>Assignee: Bill Bejeck
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> I'm trying to build up a topology (using TopologyBuilder) with following 
> components :
> {code}
> new TopologyBuilder()
>   .addSource("ingest", Pattern.compile( ... ))
>   .addProcessor("myprocessor", ..., "ingest")
>   .addStateStore(dataStore, "myprocessor")
> {code}
> Somehow this does not seem to work.
> When creating the topology with exact topic names, all works fine, but it 
> seems not possible to attach state stores when using wildcard topics on the 
> sources.
> Inside {{addStateStore}}, the processor gets connected to the state store 
> with {{connectProcessorAndStateStore}}, and there it will try to connect the 
> state store with the source topics from the processor: 
> {{connectStateStoreNameToSourceTopics}}  
> Here lies the problem: 
> {code}
> private Set<String> findSourceTopicsForProcessorParents(String[] parents) {
>     final Set<String> sourceTopics = new HashSet<>();
>     for (String parent : parents) {
>         NodeFactory nodeFactory = nodeFactories.get(parent);
>         if (nodeFactory instanceof SourceNodeFactory) {
>             sourceTopics.addAll(Arrays.asList(((SourceNodeFactory) nodeFactory).getTopics()));
>         } else if (nodeFactory instanceof ProcessorNodeFactory) {
>             sourceTopics.addAll(findSourceTopicsForProcessorParents(((ProcessorNodeFactory) nodeFactory).parents));
>         }
>     }
>     return sourceTopics;
> }
> {code}
> The call to {{sourceTopics.addAll(Arrays.asList(((SourceNodeFactory) 
> nodeFactory).getTopics()))}} will fail as there are no topics inside the 
> {{SourceNodeFactory}} object, only a pattern ({{.getTopics}} returns null)
> I also tried to search for some unit tests inside the Kafka Streams project 
> that cover this scenario, but alas, I was not able to find any.
> Only some tests on state stores with exact topic names, and some tests on 
> wildcard topics, but no combination of both ...
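
(As a self-contained toy, not Kafka code: the stand-in classes below reproduce the failure mode just described. A source declared with a Pattern carries no concrete topic list, so {{getTopics()}} yields null and the {{Arrays.asList}} call throws a NullPointerException.)

{code}
import java.util.Arrays;
import java.util.regex.Pattern;

public class PatternSourceNpe {
    // Toy stand-in for the internal SourceNodeFactory: a pattern-based source
    // holds a Pattern but no concrete topic array.
    static class SourceNodeFactory {
        final String[] topics;
        final Pattern pattern;
        SourceNodeFactory(String[] topics, Pattern pattern) {
            this.topics = topics;
            this.pattern = pattern;
        }
        String[] getTopics() { return topics; }
    }

    public static void main(String[] args) {
        SourceNodeFactory patternSource =
                new SourceNodeFactory(null, Pattern.compile("ingest-.*"));
        // Mirrors sourceTopics.addAll(Arrays.asList(nodeFactory.getTopics()))
        // from the snippet above: getTopics() is null for a pattern source,
        // so this line throws a NullPointerException.
        Arrays.asList(patternSource.getTopics());
    }
}
{code}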



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4791) Kafka Streams - unable to add state stores when using wildcard topics on the source

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15950005#comment-15950005
 ] 

ASF GitHub Bot commented on KAFKA-4791:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2618


> Kafka Streams - unable to add state stores when using wildcard topics on the 
> source
> ---
>
> Key: KAFKA-4791
> URL: https://issues.apache.org/jira/browse/KAFKA-4791
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1, 0.10.2.0
> Environment: Java 8
>Reporter: Bart Vercammen
>Assignee: Bill Bejeck
>
> I'm trying to build up a topology (using TopologyBuilder) with following 
> components :
> {code}
> new TopologyBuilder()
>   .addSource("ingest", Pattern.compile( ... ))
>   .addProcessor("myprocessor", ..., "ingest")
>   .addStateStore(dataStore, "myprocessor")
> {code}
> Somehow this does not seem to work.
> When creating the topology with exact topic names, all works fine, but it 
> seems not possible to attach state stores when using wildcard topics on the 
> sources.
> Inside {{addStateStore}}, the processor gets connected to the state store 
> with {{connectProcessorAndStateStore}}, and there it will try to connect the 
> state store with the source topics from the processor: 
> {{connectStateStoreNameToSourceTopics}}  
> Here lies the problem: 
> {code}
> private Set<String> findSourceTopicsForProcessorParents(String[] parents) {
>     final Set<String> sourceTopics = new HashSet<>();
>     for (String parent : parents) {
>         NodeFactory nodeFactory = nodeFactories.get(parent);
>         if (nodeFactory instanceof SourceNodeFactory) {
>             sourceTopics.addAll(Arrays.asList(((SourceNodeFactory) nodeFactory).getTopics()));
>         } else if (nodeFactory instanceof ProcessorNodeFactory) {
>             sourceTopics.addAll(findSourceTopicsForProcessorParents(((ProcessorNodeFactory) nodeFactory).parents));
>         }
>     }
>     return sourceTopics;
> }
> {code}
> The call to {{sourceTopics.addAll(Arrays.asList(((SourceNodeFactory) 
> nodeFactory).getTopics()))}} will fail as there are no topics inside the 
> {{SourceNodeFactory}} object, only a pattern ({{.getTopics}} returns null)
> I also tried to search for some unit tests inside the Kafka Streams project 
> that cover this scenario, but alas, I was not able to find any.
> Only some tests on state stores with exact topic names, and some tests on 
> wildcard topics, but no combination of both ...



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2618: KAFKA-4791: unable to add state store with regex m...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2618


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-03-30 Thread Dong Lin
Thanks Jun!

Hi all,

Thanks for all the comments. I am going to open the voting thread if there
is no further concern with the KIP.

Dong

On Thu, Mar 30, 2017 at 3:19 PM, Jun Rao  wrote:

> Hi, Dong,
>
> I don't have further concerns. If there are no more comments from other
> people, we can start the vote.
>
> Thanks,
>
> Jun
>
> On Thu, Mar 30, 2017 at 10:59 AM, Dong Lin  wrote:
>
> > Hey Jun,
> >
> > Thanks much for the comment! Do you think we can start the vote for KIP-112
> > and KIP-113 if there is no further concern?
> >
> > Dong
> >
> > On Thu, Mar 30, 2017 at 10:40 AM, Jun Rao  wrote:
> >
> > > Hi, Dong,
> > >
> > > Ok, so it seems that in solution (2), if the tool exits successfully,
> > then
> > > we know for sure that all replicas will be in the right log dirs.
> > Solution
> > > (1) doesn't guarantee that. That seems better and we can go with your
> > > current solution then.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Fri, Mar 24, 2017 at 4:28 PM, Dong Lin  wrote:
> > >
> > > > Hey Jun,
> > > >
> > > > No.. the current approach described in the KIP (see
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories#KIP-113:Supportreplicasmovementbetweenlogdirectories-2)Howtoreassignreplicabetweenlogdirectoriesacrossbrokers)
> > > > also sends ChangeReplicaDirRequest before writing the reassignment path
> > > > in ZK. I think we are discussing whether ChangeReplicaDirResponse (1)
> > > > shows success or (2) should specify ReplicaNotAvailableException, if
> > > > the replica has not been created yet.
> > > >
> > > > Since both solutions will send ChangeReplicaDirRequest before writing
> > > > the reassignment in ZK, their chance of creating the replica in the
> > > > right directory is the same.
> > > >
> > > > To take care of the rarer case that some brokers go down immediately
> > > > after the reassignment tool is run, solution (1) requires the
> > > > reassignment tool to repeatedly send DescribeDirRequest and
> > > > ChangeReplicaDirRequest, while solution (2) requires the tool to only
> > > > retry ChangeReplicaDirRequest if the response says
> > > > ReplicaNotAvailableException. It seems that solution (2) is cleaner
> > > > because ChangeReplicaDirRequest won't depend on DescribeDirRequest.
> > > > What do you think?
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > >
> > > > On Fri, Mar 24, 2017 at 3:56 PM, Jun Rao  wrote:
> > > >
> > > > > Hi, Dong,
> > > > >
> > > > > We are just comparing whether it's better for the reassignment tool
> > to
> > > > > send ChangeReplicaDirRequest
> > > > > (1) before or (2) after writing the reassignment path in ZK .
> > > > >
> > > > > In the case when all brokers are alive when the reassignment tool
> is
> > > run,
> > > > > (1) guarantees 100% that the new replicas will be in the right log
> > dirs
> > > > and
> > > > > (2) can't.
> > > > >
> > > > > In the rarer case that some brokers go down immediately after the
> > > > > reassignment tool is run, in either approach, there is a chance
> when
> > > the
> > > > > failed broker comes back, it will complete the pending reassignment
> > > > process
> > > > > by putting some replicas in the wrong log dirs.
> > > > >
> > > > > Implementation wise, (1) and (2) seem to be the same. So, it seems
> to
> > > me
> > > > > that (1) is better?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > >
> > > > > On Thu, Mar 23, 2017 at 11:54 PM, Dong Lin 
> > > wrote:
> > > > >
> > > > > > Hey Jun,
> > > > > >
> > > > > > Thanks much for the response! I agree with you that if multiple
> > > > replicas
> > > > > > are created in the wrong directory, we may waste resource if
> either
> > > > > > replicaMoveThread number is low or intra.broker.throttled.rate is
> > > slow.
> > > > > > Then the question is whether the suggested approach increases the
> > > > chance
> > > > > of
> > > > > > replica being created in the correct log directory.
> > > > > >
> > > > > > I think the answer is no due to the argument provided in the
> > previous
> > > > > > email. Sending ChangeReplicaDirRequest before updating znode has
> > > > > negligible
> > > > > > impact on the chance that the broker processes
> > > ChangeReplicaDirRequest
> > > > > > before LeaderAndIsrRequest from controller. If we still worry
> about
> > > the
> > > > > > order they are sent, the reassignment tool can first send
> > > > > > ChangeReplicaDirRequest (so that broker remembers it in memory),
> > > create
> > > > > > reassignment znode, and then retry ChangeReplicaDirRequest if the
> > > > > previous
> > > > > > ChangeReplicaDirResponse says the replica has not been created.
> > This
> > > > > should
> > > > > > give us the highest possible chance of creating replica in the
> > > correct
> > > > > > directory and avoid the problem of the suggested approach. I have
> > > > updated
> > > > > > "How
> > > > > > to reassign replica betwee

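(For readers following the thread, here is a rough, compilable sketch of the solution (2) flow described above. Every type and method in it, Replica, sendChangeReplicaDirRequest, writeReassignmentPathInZk, is a hypothetical placeholder for the request types proposed in KIP-113, not an existing Kafka API.)

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Set;

public class ReassignmentToolSketch {
    static final class Replica { int brokerId; String topic; int partition; }

    static void reassignWithLogDirs(Map<Replica, String> targetLogDirs)
            throws InterruptedException {
        // 1. Tell each broker up front which log dir each replica should land
        //    in, so the broker can remember the intent in memory.
        for (Map.Entry<Replica, String> e : targetLogDirs.entrySet())
            sendChangeReplicaDirRequest(e.getKey(), e.getValue());

        // 2. Write the reassignment path in ZK; the controller then sends
        //    LeaderAndIsrRequest, which actually creates the replicas.
        writeReassignmentPathInZk(targetLogDirs.keySet());

        // 3. Retry only the replicas whose broker answered
        //    ReplicaNotAvailableException (replica not created yet); no
        //    DescribeDirRequest polling is needed in this flow.
        for (Map.Entry<Replica, String> e : targetLogDirs.entrySet())
            while (!sendChangeReplicaDirRequest(e.getKey(), e.getValue()))
                Thread.sleep(1000L);  // back off until the replica exists
    }

    // Placeholder: would send the proposed ChangeReplicaDirRequest and return
    // false when the broker reports ReplicaNotAvailableException.
    static boolean sendChangeReplicaDirRequest(Replica replica, String logDir) {
        return true;
    }

    // Placeholder: would create the /admin/reassign_partitions znode.
    static void writeReassignmentPathInZk(Set<Replica> replicas) { }

    public static void main(String[] args) throws InterruptedException {
        reassignWithLogDirs(Collections.<Replica, String>emptyMap());
    }
}
{code}
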
Re: [DISCUSS] KIP-112: Handle disk failure for JBOD

2017-03-30 Thread Dong Lin
Hi all,

Thanks for all the comments. I am going to open the voting thread if there
is no further concern with the KIP.

Dong

On Wed, Mar 15, 2017 at 5:25 PM, Ismael Juma  wrote:

> Thanks for the updates Dong, they look good to me.
>
> Ismael
>
> On Wed, Mar 15, 2017 at 5:50 PM, Dong Lin  wrote:
>
> > Hey Ismael,
> >
> > Sure, I have updated the "Changes in Operational Procedures" section in
> > KIP-113 to specify the problem and solution with known disk failures. And
> > I updated the "Test Plan" section to note that we have a test in KIP-113
> > to verify that replicas already created on the good log directories will
> > not be affected by failure of other log directories.
> >
> > Please let me know if there is any other improvement I can make. Thanks
> for
> > your comment.
> >
> > Dong
> >
> >
> > On Wed, Mar 15, 2017 at 3:18 AM, Ismael Juma  wrote:
> >
> > > Hi Dong,
> > >
> > > Yes, that sounds good to me. I'd list option 2 first since that is safe
> > > and, as you said, no worse than what happens today. The file approach
> is
> > a
> > > bit hacky as you said, so it may be a bit fragile. Not sure if we
> really
> > > want to mention that. :)
> > >
> > > About the note in KIP-112 versus adding the test in KIP-113, I think it
> > > would make sense to add a short sentence stating that this scenario is
> > > covered in KIP-113. People won't necessarily read both KIPs at the same
> > > time and it's helpful to cross-reference when it makes sense.
> > >
> > > Thanks for your work on this.
> > >
> > > Ismael
> > >
> > > On Tue, Mar 14, 2017 at 11:00 PM, Dong Lin 
> wrote:
> > >
> > > > Hey Ismael,
> > > >
> > > > I get your concern that it is more likely for a disk to be slow, or to
> > > > exhibit other forms of non-fatal symptoms, after some known fatal
> > > > error. Then it is weird for a user to start the broker with the
> > > > likely-problematic disk in the broker config. In that case, I think
> > > > there are two things the user can do:
> > > >
> > > > 1) Intentionally change the log directory in the config to point to a
> > > > file. This is a bit hacky, but it works well until we make a more
> > > > appropriate long-term change in Kafka to handle this case.
> > > > 2) Just don't start the broker with bad log directories. Always fix the
> > > > disk before restarting the broker. This is a safe approach that is no
> > > > worse than current practice.
> > > >
> > > > Would this address your concern if I specify the problem and the two
> > > > solutions in the KIP?
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > > On Tue, Mar 14, 2017 at 3:29 PM, Dong Lin 
> wrote:
> > > >
> > > > > Hey Ismael,
> > > > >
> > > > > Thanks for the comment. Please see my reply below.
> > > > >
> > > > > On Tue, Mar 14, 2017 at 10:31 AM, Ismael Juma 
> > > wrote:
> > > > >
> > > > >> Thanks Dong. Comments inline.
> > > > >>
> > > > >> On Fri, Mar 10, 2017 at 6:25 PM, Dong Lin 
> > > wrote:
> > > > >> >
> > > > >> > I get your point. But I am not sure we should recommend user to
> > > simply
> > > > >> > remove disk from the broker config. If user simply does this
> > without
> > > > >> > checking the utilization of good disks, replica on the bad disk
> > will
> > > > be
> > > > >> > re-created on the good disk and may overload the good disks,
> > causing
> > > > >> > cascading failure.
> > > > >> >
> > > > >>
> > > > >> Good point.
> > > > >>
> > > > >>
> > > > >> >
> > > > >> > I agree with you and Colin that slow disk may cause problem.
> > > However,
> > > > >> > performance degradation due to slow disk this is an existing
> > problem
> > > > >> that
> > > > >> > is not detected or handled by Kafka or KIP-112.
> > > > >>
> > > > >>
> > > > >> I think an important difference is that a number of disk errors
> are
> > > > >> currently fatal and won't be after KIP-112. So it introduces new
> > > > scenarios
> > > > >> (for example, bouncing a broker that is working fine although some
> > > disks
> > > > >> have been marked bad).
> > > > >>
> > > > >
> > > > > Hmm.. I am still trying to understand why KIP-112 creates new
> > > > > scenarios. A slow disk is not considered a fatal error and won't be
> > > > > caught by either the existing Kafka design or this KIP. If any disk
> > > > > is marked bad, it means the broker encountered an IOException when
> > > > > accessing the disk; most likely the broker will encounter an
> > > > > IOException again when accessing this disk and mark it as bad after
> > > > > the bounce. I guess you are talking about the case where a disk is
> > > > > marked bad, the broker is bounced, and then the disk provides
> > > > > degraded performance without being marked bad, right? But this seems
> > > > > to be an existing problem we already have today with slow disks.
> > > > >
> > > > > Here are the possible scenarios with bad disk after broker bounce:
> > > > >
> > > > > 1) bad disk -> broker bounce -> good disk. This would be great.
> > > > > 2) bad disk -> broker bounce -> slow disk. Slow disk is 

[jira] [Updated] (KAFKA-4987) Topic creation allows invalid config values on running brokers

2017-03-30 Thread dan norwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dan norwood updated KAFKA-4987:
---
Affects Version/s: (was: 0.10.2.0)

> Topic creation allows invalid config values on running brokers
> --
>
> Key: KAFKA-4987
> URL: https://issues.apache.org/jira/browse/KAFKA-4987
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1, 0.10.1.0
>Reporter: dan norwood
>
> We use KIP-4 capabilities to make a `CreateTopicsRequest` for our topics. One 
> of the configs we use is `cleanup.policy=compact, delete`. This was 
> inadvertently run against a cluster that does not support that policy. The 
> result was that the topic was created; however, on a subsequent broker bounce 
> the broker fails to start up.
> {code}
> [2017-03-23 00:00:44,837] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.config.ConfigException: Invalid value compact,delete 
> for configuration cleanup.policy: String must be one of: compact, delete
>   at 
> org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:827)
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:427)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:55)
>   at kafka.log.LogConfig.(LogConfig.scala:56)
>   at kafka.log.LogConfig$.fromProps(LogConfig.scala:192)
>   at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:598)
>   at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:597)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:597)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:183)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
>   at kafka.Kafka$.main(Kafka.scala:67)
>   at kafka.Kafka.main(Kafka.scala)
> [2017-03-23 00:00:44,839] INFO shutting down (kafka.server.KafkaServer)
> [2017-03-23 00:00:44,844] INFO shut down completed (kafka.server.KafkaServer)
> {code}
> I believe that the broker should fail when given an invalid config during 
> topic creation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-03-30 Thread Jun Rao
Hi, Dong,

I don't have further concerns. If there are no more comments from other
people, we can start the vote.

Thanks,

Jun

On Thu, Mar 30, 2017 at 10:59 AM, Dong Lin  wrote:

> Hey Jun,
>
> Thanks much for the comment! Do you think we can start the vote for KIP-112
> and KIP-113 if there is no further concern?
>
> Dong
>
> On Thu, Mar 30, 2017 at 10:40 AM, Jun Rao  wrote:
>
> > Hi, Dong,
> >
> > Ok, so it seems that in solution (2), if the tool exits successfully,
> > then we know for sure that all replicas will be in the right log dirs.
> > Solution (1) doesn't guarantee that. That seems better and we can go with
> > your current solution then.
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Mar 24, 2017 at 4:28 PM, Dong Lin  wrote:
> >
> > > Hey Jun,
> > >
> > > No.. the current approach described in the KIP (see here
> > >  > > 3A+Support+replicas+movement+between+log+directories#KIP-
> > > 113:Supportreplicasmovementbetweenlogdirectories-2)Howtoreas
> > > signreplicabetweenlogdirectoriesacrossbrokers>)
> > > also sends ChangeReplicaDirRequest before writing the reassignment path
> > > in ZK. I think we are discussing whether ChangeReplicaDirResponse
> > > (1) shows success or (2) should specify ReplicaNotAvailableException,
> > > if the replica has not been created yet.
> > >
> > > Since both solutions send ChangeReplicaDirRequest before writing the
> > > reassignment in ZK, their chance of creating the replica in the right
> > > directory is the same.
> > >
> > > To take care of the rarer case that some brokers go down immediately
> > > after the reassignment tool is run, solution (1) requires the
> > > reassignment tool to repeatedly send DescribeDirRequest and
> > > ChangeReplicaDirRequest, while solution (2) requires the tool to only
> > > retry ChangeReplicaDirRequest if the response says
> > > ReplicaNotAvailableException. It seems that solution (2) is cleaner
> > > because ChangeReplicaDirRequest won't depend on DescribeDirRequest.
> > > What do you think?
> > >
> > > Thanks,
> > > Dong
> > >
> > >
> > > On Fri, Mar 24, 2017 at 3:56 PM, Jun Rao  wrote:
> > >
> > > > Hi, Dong,
> > > >
> > > > We are just comparing whether it's better for the reassignment tool
> > > > to send ChangeReplicaDirRequest (1) before or (2) after writing the
> > > > reassignment path in ZK.
> > > >
> > > > In the case when all brokers are alive when the reassignment tool is
> > > > run, (1) guarantees 100% that the new replicas will be in the right
> > > > log dirs and (2) can't.
> > > >
> > > > In the rarer case that some brokers go down immediately after the
> > > > reassignment tool is run, in either approach there is a chance that
> > > > when the failed broker comes back, it will complete the pending
> > > > reassignment process by putting some replicas in the wrong log dirs.
> > > >
> > > > Implementation-wise, (1) and (2) seem to be the same. So, it seems to
> > > > me that (1) is better?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Thu, Mar 23, 2017 at 11:54 PM, Dong Lin 
> > wrote:
> > > >
> > > > > Hey Jun,
> > > > >
> > > > > Thanks much for the response! I agree with you that if multiple
> > > > > replicas are created in the wrong directory, we may waste resources
> > > > > if either the replicaMoveThread count is low or
> > > > > intra.broker.throttled.rate is slow. Then the question is whether the
> > > > > suggested approach increases the chance of the replica being created
> > > > > in the correct log directory.
> > > > >
> > > > > I think the answer is no, due to the argument provided in the
> > > > > previous email. Sending ChangeReplicaDirRequest before updating the
> > > > > znode has negligible impact on the chance that the broker processes
> > > > > ChangeReplicaDirRequest before the LeaderAndIsrRequest from the
> > > > > controller. If we still worry about the order they are sent in, the
> > > > > reassignment tool can first send ChangeReplicaDirRequest (so that the
> > > > > broker remembers it in memory), create the reassignment znode, and
> > > > > then retry ChangeReplicaDirRequest if the previous
> > > > > ChangeReplicaDirResponse says the replica has not been created. This
> > > > > should give us the highest possible chance of creating the replica in
> > > > > the correct directory and avoid the problem of the suggested
> > > > > approach. I have updated "How to reassign replica between log
> > > > > directories across brokers" in the KIP to explain this procedure (see
> > > > > the sketch at the end of this message).
> > > > >
> > > > > To answer your question, the reassignment tool should fail with a
> > > > > proper error message if the user has specified a log directory for a
> > > > > replica on an offline broker. This is reasonable because the
> > > > > reassignment tool cannot guarantee that the replica will be moved to
> > > > > the specified log directory if the broker is 
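
Below is a minimal sketch of the retry procedure described above. Everything
in it is hypothetical: ChangeReplicaDirRequest was only a KIP-113 proposal at
this point, so the broker and ZooKeeper interactions are reduced to local stub
methods, and the replica identifiers are plain strings.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    public class ReassignmentSketch {

        enum DirResponse { SUCCESS, REPLICA_NOT_AVAILABLE }

        // Stand-in for sending ChangeReplicaDirRequest to the relevant brokers.
        static Map<String, DirResponse> sendChangeReplicaDirRequests(Map<String, String> replicaToDir) {
            Map<String, DirResponse> out = new HashMap<>();
            for (String replica : replicaToDir.keySet())
                out.put(replica, DirResponse.SUCCESS); // pretend all replicas now exist
            return out;
        }

        // Stand-in for writing the reassignment path in ZooKeeper.
        static void createReassignmentZnode(Set<String> replicas) { }

        public static void main(String[] args) throws InterruptedException {
            Map<String, String> replicaToDir = new HashMap<>();
            replicaToDir.put("topicA-0@broker1", "/disks/d2");

            // 1. Send ChangeReplicaDirRequest first, so each broker remembers
            //    the desired log dir in memory before the replica is created.
            Map<String, DirResponse> responses = sendChangeReplicaDirRequests(replicaToDir);

            // 2. Only then trigger the reassignment itself via the znode.
            createReassignmentZnode(replicaToDir.keySet());

            // 3. Retry only the replicas whose response said
            //    REPLICA_NOT_AVAILABLE, i.e. not created yet when asked.
            while (responses.containsValue(DirResponse.REPLICA_NOT_AVAILABLE)) {
                Map<String, DirResponse> current = responses;
                replicaToDir.keySet().removeIf(r -> current.get(r) != DirResponse.REPLICA_NOT_AVAILABLE);
                Thread.sleep(1000); // back off between retries
                responses = sendChangeReplicaDirRequests(replicaToDir);
            }
        }
    }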

[jira] [Commented] (KAFKA-4810) SchemaBuilder should be more lax about checking that fields are unset if they are being set to the same value

2017-03-30 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949951#comment-15949951
 ] 

Ewen Cheslack-Postava commented on KAFKA-4810:
--

[~pshk4r] Great! You're already a contributor on the Kafka project so when you 
want to pick up a ticket you can just assign it to yourself so someone else 
doesn't accidentally work on it in parallel.

> SchemaBuilder should be more lax about checking that fields are unset if they 
> are being set to the same value
> -
>
> Key: KAFKA-4810
> URL: https://issues.apache.org/jira/browse/KAFKA-4810
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Ewen Cheslack-Postava
>  Labels: newbie
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Currently SchemaBuilder is strict when checking that certain fields have not 
> been set yet (e.g. version, name, doc). It just checks that the field is 
> null. This is intended to protect the user from buggy code that overwrites a 
> field with different values, but it's a bit too strict currently. In generic 
> code for converting schemas (e.g. Converters) you will sometimes initialize a 
> builder with these values (e.g. because you get a SchemaBuilder for a logical 
> type, which sets name & version), but then have generic code for setting name 
> & version from the source schema.
> We saw this bug in practice with Confluent's AvroConverter, so it's likely it 
> could trip up others as well. You can work around the issue, but it would be 
> nice if exceptions were only thrown if you try to overwrite an existing value 
> with a different value.
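>
> A minimal sketch of the failure mode, using the real Connect Decimal helper; 
> the re-set value below is intentionally identical to the one already set:
> {code}
> import org.apache.kafka.connect.data.Decimal;
> import org.apache.kafka.connect.data.SchemaBuilder;
>
> public class SchemaBuilderStrictness {
>     public static void main(String[] args) {
>         // Decimal.builder() hands back a SchemaBuilder whose name (and
>         // version) are already set for the logical type.
>         SchemaBuilder builder = Decimal.builder(2);
>         // Generic converter code then re-applies name/version from the
>         // source schema. Even though the value is identical, this throws
>         // today; the proposal is to allow it when old and new values match.
>         builder.name(Decimal.LOGICAL_NAME);
>     }
> }
> {code}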



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4344) Exception when accessing partition, offset and timestamp in processor class

2017-03-30 Thread saiprasad mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949949#comment-15949949
 ] 

saiprasad mishra commented on KAFKA-4344:
-

Hi All,
A lot of people have reached out to me about how to use existing Spring beans.

If anybody is looking for an example of how to initialize Kafka Streams in a 
Spring Boot app and invoke existing Spring beans from a Kafka Streams processor 
class, please find the code sample in the gist below. (Sorry for the lack of a 
sample project for now due to lack of time, but it will come soon.)

https://gist.github.com/saiprasadmishra/8362134f87ae84e8183eca3b1afcf23f



> Exception when accessing partition, offset and timestamp in processor class
> ---
>
> Key: KAFKA-4344
> URL: https://issues.apache.org/jira/browse/KAFKA-4344
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: saiprasad mishra
>Assignee: Guozhang Wang
>Priority: Minor
>
> I have a kafka stream pipeline like below
> source topic stream -> filter for null value ->map to make it keyed by id 
> ->custom processor to mystore ->to another topic -> ktable
> I am hitting the below type of exception in a custom processor class if I try 
> to access offset() or partition() or timestamp() from the ProcessorContext in 
> the process() method. I was hoping it would return the partition and offset 
> for the enclosing topic (in this case the source topic) it is consuming from, 
> or -1 per the API docs (see the sketch after the stack trace).
> java.lang.IllegalStateException: This should not happen as offset() should 
> only be called while a record is processed
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.offset(ProcessorContextImpl.java:181)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at com.sai.repo.MyStore.process(MyStore.java:72) ~[classes!/:?]
>   at com.sai.repo.MyStore.process(MyStore.java:39) ~[classes!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:43)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:181)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
>  ~[kafka-streams-0.10.1.0.jar!/:?]
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
>  [kafka-streams-0.10.1.0.jar!/:?]
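>
> For reference, a minimal sketch of a processor of the kind described above, 
> written against the 0.10.1 Processor API (the class name and the printing 
> are illustrative only; MyStore itself is not shown in this report):
> {code}
> import org.apache.kafka.streams.processor.Processor;
> import org.apache.kafka.streams.processor.ProcessorContext;
>
> public class MetadataLoggingProcessor implements Processor<String, String> {
>     private ProcessorContext context;
>
>     @Override
>     public void init(ProcessorContext context) {
>         this.context = context;
>     }
>
>     @Override
>     public void process(String key, String value) {
>         // Per the API docs these are valid while a record is being
>         // processed; the stack trace above shows them throwing
>         // IllegalStateException in this topology instead.
>         System.out.printf("partition=%d offset=%d timestamp=%d%n",
>                 context.partition(), context.offset(), context.timestamp());
>     }
>
>     @Override
>     public void punctuate(long timestamp) { }
>
>     @Override
>     public void close() { }
> }
> {code}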



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4988) JVM crash when running on Alpine Linux

2017-03-30 Thread Vincent Rischmann (JIRA)
Vincent Rischmann created KAFKA-4988:


 Summary: JVM crash when running on Alpine Linux
 Key: KAFKA-4988
 URL: https://issues.apache.org/jira/browse/KAFKA-4988
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Vincent Rischmann
Priority: Minor


I'm developing my Kafka Streams application using Docker and I run my jars 
using the official openjdk:8-jre-alpine image.

I'm just starting to use windowing and now the JVM crashes, I think because of 
an issue with RocksDB.

It's trivial to fix on my part: just use the Debian Jessie-based image. 
However, it would be cool if Alpine were supported too, since its Docker 
images are quite a bit lighter.

{quote}
Exception in thread "StreamThread-1" java.lang.UnsatisfiedLinkError: 
/tmp/librocksdbjni3285995384052305662.so: Error loading shared library 
ld-linux-x86-64.so.2: No such file or directory (needed by 
/tmp/librocksdbjni3285995384052305662.so)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at 
org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
at 
org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
at org.rocksdb.RocksDB.(RocksDB.java:35)
at org.rocksdb.Options.(Options.java:22)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:115)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:148)
at 
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:39)
at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:131)
at 
org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
at 
org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
at 
org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:141)
at 
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
at 
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
at 
org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
at 
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7f60f34ce088, pid=1, tid=0x7f60f3705ab0
#
# JRE version: OpenJDK Runtime Environment (8.0_121-b13) (build 1.8.0_121-b13)
# Java VM: OpenJDK 64-Bit Server VM (25.121-b13 mixed mode linux-amd64 
compressed oops)
# Derivative: IcedTea 3.3.0
# Distribution: Custom build (Thu Feb  9 08:34:09 GMT 2017)
# Problematic frame:
# C  [ld-musl-x86_64.so.1+0x50088]  memcpy+0x24
#
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /usr/local/event-counter/hs_err_pid1.log
#
# If you would like to submi

[GitHub] kafka pull request #2767: Vagrant provisioning fixes

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2767


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2762: MINOR: Ensure streaming iterator is closed by Fetc...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2762


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4987) Topic creation allows invalid config values on running brokers

2017-03-30 Thread dan norwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dan norwood updated KAFKA-4987:
---
Affects Version/s: 0.10.0.1

> Topic creation allows invalid config values on running brokers
> --
>
> Key: KAFKA-4987
> URL: https://issues.apache.org/jira/browse/KAFKA-4987
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1, 0.10.1.0, 0.10.2.0
>Reporter: dan norwood
>
> We use KIP-4 capabilities to make a `CreateTopicsRequest` for our topics. One 
> of the configs we use is `cleanup.policy=compact, delete`. This was 
> inadvertently run against a cluster that does not support that policy. The 
> result was that the topic was created; however, on a subsequent broker bounce 
> the broker fails to start up.
> {code}
> [2017-03-23 00:00:44,837] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.config.ConfigException: Invalid value compact,delete 
> for configuration cleanup.policy: String must be one of: compact, delete
>   at 
> org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:827)
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:427)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:55)
>   at kafka.log.LogConfig.(LogConfig.scala:56)
>   at kafka.log.LogConfig$.fromProps(LogConfig.scala:192)
>   at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:598)
>   at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:597)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:597)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:183)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
>   at kafka.Kafka$.main(Kafka.scala:67)
>   at kafka.Kafka.main(Kafka.scala)
> [2017-03-23 00:00:44,839] INFO shutting down (kafka.server.KafkaServer)
> [2017-03-23 00:00:44,844] INFO shut down completed (kafka.server.KafkaServer)
> {code}
> I believe that the broker should fail when given an invalid config during 
> topic creation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4986) Add producer per task support

2017-03-30 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4986:
---
Description: Add new config parameter {{processing_guarantee}} and enable 
"producer per task" initialization if the new config is set to {{exactly_once}}.

> Add producer per task support
> -
>
> Key: KAFKA-4986
> URL: https://issues.apache.org/jira/browse/KAFKA-4986
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> Add new config parameter {{processing_guarantee}} and enable "producer per 
> task" initialization if the new config is set to {{exactly_once}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4986) Add producer per task support

2017-03-30 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4986:
---
Assignee: Matthias J. Sax
  Status: Patch Available  (was: Open)

https://github.com/apache/kafka/pull/2773

> Add producer per task support
> -
>
> Key: KAFKA-4986
> URL: https://issues.apache.org/jira/browse/KAFKA-4986
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4987) Topic creation allows invalid config values on running brokers

2017-03-30 Thread dan norwood (JIRA)
dan norwood created KAFKA-4987:
--

 Summary: Topic creation allows invalid config values on running 
brokers
 Key: KAFKA-4987
 URL: https://issues.apache.org/jira/browse/KAFKA-4987
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.0, 0.10.1.0
Reporter: dan norwood


We use KIP-4 capabilities to make a `CreateTopicsRequest` for our topics. One of 
the configs we use is `cleanup.policy=compact, delete`. This was inadvertently 
run against a cluster that does not support that policy. The result was that 
the topic was created; however, on a subsequent broker bounce the broker fails 
to start up.

{code}
[2017-03-23 00:00:44,837] FATAL Fatal error during KafkaServer startup. Prepare 
to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.config.ConfigException: Invalid value compact,delete 
for configuration cleanup.policy: String must be one of: compact, delete
at 
org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:827)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:427)
at 
org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:55)
at kafka.log.LogConfig.(LogConfig.scala:56)
at kafka.log.LogConfig$.fromProps(LogConfig.scala:192)
at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:598)
at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:597)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at 
scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at 
scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at 
scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:597)
at kafka.server.KafkaServer.startup(KafkaServer.scala:183)
at 
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-03-23 00:00:44,839] INFO shutting down (kafka.server.KafkaServer)
[2017-03-23 00:00:44,844] INFO shut down completed (kafka.server.KafkaServer)
{code}

I believe that the broker should fail when given an invalid config during topic 
creation.
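
For illustration, the same create expressed with the Java AdminClient API 
(org.apache.kafka.clients.admin), which exposes the same KIP-4 CreateTopics 
path; the reporter used the raw protocol, and the topic name and settings 
below are made up:
{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("my-topic", 1, (short) 1);
            // The broker accepts this at creation time even though it cannot
            // parse the combined policy, and only fails on the next restart.
            topic.configs(Collections.singletonMap("cleanup.policy", "compact, delete"));
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
{code}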



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-4923) Add Exactly-Once Semantics

2017-03-30 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4923 started by Matthias J. Sax.
--
> Add Exactly-Once Semantics
> --
>
> Key: KAFKA-4923
> URL: https://issues.apache.org/jira/browse/KAFKA-4923
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>  Labels: kip
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-129%3A+Streams+Exactly-Once+Semantics



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4986) Add producer per task support

2017-03-30 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-4986:
--

 Summary: Add producer per task support
 Key: KAFKA-4986
 URL: https://issues.apache.org/jira/browse/KAFKA-4986
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Matthias J. Sax






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2773: Add exactly-once config to StreamsConfig

2017-03-30 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2773

Add exactly-once config to StreamsConfig

Enable producer per task if exactly-once config is enabled.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka exactly-once-streams-producer-per-task

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2773.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2773


commit 7122af6960cf16b80cd9fd36ac53474a0a673715
Author: Matthias J. Sax 
Date:   2017-03-21T20:00:34Z

Add exactly-once config to StreamsConfig
Enable producer per task if exactly-once config is enabled




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2764: MINOR: reduce amount of verbose printing

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2764


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-86: Configurable SASL callback handlers

2017-03-30 Thread Rajini Sivaram
I have made a minor change to the callback handler interface to pass in the
JAAS configuration entries in *configure,* to work with the multiple
listener configuration introduced in KIP-103. I have also renamed the
interface to AuthenticateCallbackHandler instead of AuthCallbackHandler to
avoid confusion with authorization.

I have rebased and updated the PR (https://github.com/apache/kafka/pull/2022)
as well. Please let me know if there are any other comments or suggestions
to move this forward.
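
For reference, a sketch of the reworked handler interface implied by the 
description above; the exact signature here is a guess from the description 
and may differ from the final PR:

    import java.util.List;
    import java.util.Map;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.login.AppConfigurationEntry;

    // Sketch only: the JAAS configuration entries for the listener/mechanism
    // are passed into configure() directly, instead of being looked up from a
    // global login context, which is what makes KIP-103-style multi-listener
    // setups workable.
    public interface AuthenticateCallbackHandler extends CallbackHandler {
        void configure(Map<String, ?> configs, String saslMechanism,
                       List<AppConfigurationEntry> jaasConfigEntries);
        void close();
    }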

Thank you...

Regards,

Rajini


On Thu, Dec 15, 2016 at 3:11 PM, Rajini Sivaram  wrote:

> Ismael,
>
> The reason for choosing CallbackHandler interface as the configurable
> interface is flexibility. As you say, we could instead define a simpler
> PlainCredentialProvider and ScramCredentialProvider. But that would tie
> users to Kafka's SaslServer implementation for PLAIN and SCRAM.
> SaslServer/SaslClient implementations are already pluggable using standard
> Java security provider mechanism. Callback handlers are the configuration
> mechanism for SaslServer/SaslClient. By making the handlers configurable,
> SASL becomes fully configurable for mechanisms supported by Kafka as well
> as custom mechanisms. From the 'Scenarios' section in the KIP, a simpler
> PLAIN/SCRAM interface satisfies the first two, but configurable callback
> handlers enables all five. I agree that most users don't require this level
> of flexibility, but we have had discussions about custom mechanisms in the
> past for integration with existing authentication servers. So I think it is
> a feature worth supporting.
>
> On Thu, Dec 15, 2016 at 2:21 PM, Ismael Juma  wrote:
>
> > Thanks Rajini, your answers make sense to me. One more general point: we
> > are following the JAAS callback architecture and exposing that to the
> user
> > where the user has to write code like:
> >
> > @Override
> > public void handle(Callback[] callbacks) throws IOException,
> > UnsupportedCallbackException {
> > String username = null;
> > for (Callback callback: callbacks) {
> > if (callback instanceof NameCallback)
> > username = ((NameCallback) callback).getDefaultName();
> > else if (callback instanceof PlainAuthenticateCallback) {
> > PlainAuthenticateCallback plainCallback =
> > (PlainAuthenticateCallback) callback;
> > boolean authenticated = authenticate(username,
> > plainCallback.password());
> > plainCallback.authenticated(authenticated);
> > } else
> > throw new UnsupportedCallbackException(callback);
> > }
> > }
> >
> > protected boolean authenticate(String username, char[] password)
> throws
> > IOException {
> > if (username == null)
> > return false;
> > else {
> > String expectedPassword =
> > JaasUtils.jaasConfig(LoginType.SERVER.contextName(), "user_" + username,
> > PlainLoginModule.class.getName());
> > return Arrays.equals(password, expectedPassword.toCharArray());
> > }
> > }
> >
> > Since we need to create a new callback type for Plain, Scram and so on,
> is
> > it really worth it to do it this way? For example, in the code above, the
> > `authenticate` method could be the only thing the user has to implement
> and
> > we could do the necessary work to unwrap the data from the various
> > callbacks when interacting with the SASL API. More work for us, but a
> much
> > more pleasant API for users. What are the drawbacks?
> >
> > Ismael
> >
> > On Thu, Dec 15, 2016 at 1:06 AM, Rajini Sivaram 
> > wrote:
> >
> > > Ismael,
> > >
> > > 1. At the moment AuthCallbackHandler is not a public interface, so I am
> > > assuming that it can be modified. Yes, agree that we should keep
> > non-public
> > > methods separate. Will do that as part of the implementation of this
> KIP.
> > >
> > > 2. Callback handlers do tend to depend on ordering, including those
> > > included in the JVM and those in Kafka. I have specified the ordering
> > > in the KIP. Will make sure they get included in the documentation too.
> > >
> > > 3. Added a note to the KIP. Kafka needs access to the SCRAM credentials
> > to
> > > perform SCRAM authentication. For PLAIN, Kafka only needs to know if
> the
> > > password is valid for the user. We want to support external
> > authentication
> > > servers whose interface is to validate password, not retrieve it.
> > >
> > > 4. Added code of ScramCredential to the KIP.
> > >
> > >
> > > On Wed, Dec 14, 2016 at 3:54 PM, Ismael Juma 
> wrote:
> > >
> > > > Thanks Rajini, that helps. A few comments:
> > > >
> > > > 1. The `AuthCallbackHandler` interface already exists and we are
> making
> > > > breaking changes (removing a parameter from `configure` and adding
> > > > additional methods). Is the reasoning that it was not a public
> > interface
> > > > before? It would be good to clearly separate public versus non-publ

[jira] [Commented] (KAFKA-4980) testReprocessingFromScratch unit test failure

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949749#comment-15949749
 ] 

ASF GitHub Bot commented on KAFKA-4980:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2757


> testReprocessingFromScratch unit test failure
> -
>
> Key: KAFKA-4980
> URL: https://issues.apache.org/jira/browse/KAFKA-4980
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Got this error in a PR: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/2524/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic/
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic
> Failing for the past 1 build (Since Failed#2524 )
> Took 0.18 sec.
> Error Message
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.
> Stacktrace
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2757: KAFKA-4980: testReprocessingFromScratch unit test ...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2757


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4980) testReprocessingFromScratch unit test failure

2017-03-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4980:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2757
[https://github.com/apache/kafka/pull/2757]

> testReprocessingFromScratch unit test failure
> -
>
> Key: KAFKA-4980
> URL: https://issues.apache.org/jira/browse/KAFKA-4980
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Got this error in a PR: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/2524/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic/
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic
> Failing for the past 1 build (Since Failed#2524 )
> Took 0.18 sec.
> Error Message
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.
> Stacktrace
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-129: Kafka Streams Exactly-Once Semantics

2017-03-30 Thread Matthias J. Sax
+1

I am closing this vote now. The KIP got accepted with

+3 binding (Jason, Jay, Ram) and
+6 non-binding (Eno, Damian, Apurva, Bill, Michael, Matthias)

votes.


Thanks for voting!

-Matthias

On 3/29/17 10:27 AM, Michael Noll wrote:
> +1 (non-binding)
> 
> On Wed, Mar 29, 2017 at 6:38 PM, Sriram Subramanian 
> wrote:
> 
>> +1
>>
>> On Wed, Mar 29, 2017 at 9:36 AM, Bill Bejeck  wrote:
>>
>>> +1 (non-binding)
>>>
>>> Thanks Matthias,
>>> Bill
>>>
>>> On Wed, Mar 29, 2017 at 12:18 PM, Apurva Mehta 
>>> wrote:
>>>
 +1 (non-binding)

 On Wed, Mar 29, 2017 at 9:17 AM, Jay Kreps  wrote:

> +1
>
> -Jay
>
> On Mon, Mar 20, 2017 at 11:27 AM, Matthias J. Sax <
>>> matth...@confluent.io
>
> wrote:
>
>> Hi,
>>
>> I would like to start the vote for KIP-129. Of course, feel free to
>> provide some more feedback on the DISCUSS thread.
>>
>> Thanks a lot!
>>
>>
>> -Matthias
>>
>>
>

>>>
>>
> 



signature.asc
Description: OpenPGP digital signature


[jira] [Commented] (KAFKA-4980) testReprocessingFromScratch unit test failure

2017-03-30 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949694#comment-15949694
 ] 

Matthias J. Sax commented on KAFKA-4980:


PR: https://github.com/apache/kafka/pull/2757

> testReprocessingFromScratch unit test failure
> -
>
> Key: KAFKA-4980
> URL: https://issues.apache.org/jira/browse/KAFKA-4980
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Got this error in a PR: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/2524/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic/
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic
> Failing for the past 1 build (Since Failed#2524 )
> Took 0.18 sec.
> Error Message
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.
> Stacktrace
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4980) testReprocessingFromScratch unit test failure

2017-03-30 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4980:
---
Status: Patch Available  (was: Open)

> testReprocessingFromScratch unit test failure
> -
>
> Key: KAFKA-4980
> URL: https://issues.apache.org/jira/browse/KAFKA-4980
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Got this error in a PR: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/2524/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic/
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic
> Failing for the past 1 build (Since Failed#2524 )
> Took 0.18 sec.
> Error Message
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.
> Stacktrace
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Jason Gustafson
+1 Thanks for the KIP!

On Thu, Mar 30, 2017 at 12:51 PM, Guozhang Wang  wrote:

> +1
>
> Sorry about the previous email, Gmail seems to be collapsing them into a
> single thread in my inbox.
>
> Guozhang
>
> On Thu, Mar 30, 2017 at 11:34 AM, Guozhang Wang 
> wrote:
>
> > Damian, could you create a new thread for the voting process?
> >
> > Thanks!
> >
> > Guozhang
> >
> > On Thu, Mar 30, 2017 at 10:33 AM, Bill Bejeck  wrote:
> >
> >> +1(non-binding)
> >>
> >> On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska 
> >> wrote:
> >>
> >> > +1 (non binding)
> >> >
> >> > Thanks
> >> > Eno
> >> > > On 30 Mar 2017, at 18:01, Matthias J. Sax 
> >> wrote:
> >> > >
> >> > > +1
> >> > >
> >> > > On 3/30/17 3:46 AM, Damian Guy wrote:
> >> > >> Hi All,
> >> > >>
> >> > >> I'd like to start the voting thread on KIP-134:
> >> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > 134%3A+Delay+initial+consumer+group+rebalance
> >> > >>
> >> > >> Thanks,
> >> > >> Damian
> >> > >>
> >> > >
> >> >
> >> >
> >>
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
>
> --
> -- Guozhang
>


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Guozhang Wang
+1

Sorry about the previous email, Gmail seems to be collapsing them into a
single thread in my inbox.

Guozhang

On Thu, Mar 30, 2017 at 11:34 AM, Guozhang Wang  wrote:

> Damian, could you create a new thread for the voting process?
>
> Thanks!
>
> Guozhang
>
> On Thu, Mar 30, 2017 at 10:33 AM, Bill Bejeck  wrote:
>
>> +1(non-binding)
>>
>> On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska 
>> wrote:
>>
>> > +1 (non binding)
>> >
>> > Thanks
>> > Eno
>> > > On 30 Mar 2017, at 18:01, Matthias J. Sax 
>> wrote:
>> > >
>> > > +1
>> > >
>> > > On 3/30/17 3:46 AM, Damian Guy wrote:
>> > >> Hi All,
>> > >>
>> > >> I'd like to start the voting thread on KIP-134:
>> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > 134%3A+Delay+initial+consumer+group+rebalance
>> > >>
>> > >> Thanks,
>> > >> Damian
>> > >>
>> > >
>> >
>> >
>>
>
>
>
> --
> -- Guozhang
>



-- 
-- Guozhang


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Damian Guy
Hi Guozhang,
This was a new thread!

Damian
On Thu, 30 Mar 2017 at 19:34, Guozhang Wang  wrote:

> Damian, could you create a new thread for the voting process?
>
> Thanks!
>
> Guozhang
>
> On Thu, Mar 30, 2017 at 10:33 AM, Bill Bejeck  wrote:
>
> > +1(non-binding)
> >
> > On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska 
> > wrote:
> >
> > > +1 (non binding)
> > >
> > > Thanks
> > > Eno
> > > > On 30 Mar 2017, at 18:01, Matthias J. Sax 
> > wrote:
> > > >
> > > > +1
> > > >
> > > > On 3/30/17 3:46 AM, Damian Guy wrote:
> > > >> Hi All,
> > > >>
> > > >> I'd like to start the voting thread on KIP-134:
> > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 134%3A+Delay+initial+consumer+group+rebalance
> > > >>
> > > >> Thanks,
> > > >> Damian
> > > >>
> > > >
> > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Commented] (KAFKA-4208) Add Record Headers

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949654#comment-15949654
 ] 

ASF GitHub Bot commented on KAFKA-4208:
---

GitHub user michaelandrepearce opened a pull request:

https://github.com/apache/kafka/pull/2772

KAFKA-4208: Add Record Headers

As per KIP-82

Adding record headers api to ProducerRecord, ConsumerRecord
Support to convert from protocol to api added Kafka Producer, Kafka Fetcher 
(Consumer)
Updated MirrorMaker, ConsoleConsumer and scala BaseConsumer
Add RecordHeaders and RecordHeader implementation of the interfaces Headers 
and Header

Some bits are reverted to being Java 7 compatible for the moment, until 
KIP-118 is implemented.
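
As a rough sketch of the API shape the PR describes (names taken from the PR 
text; details may change in review):

    import java.nio.charset.StandardCharsets;
    import java.util.Collections;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.header.Header;
    import org.apache.kafka.common.header.internals.RecordHeader;

    public class HeadersSketch {
        public static void main(String[] args) {
            Header trace = new RecordHeader("trace-id",
                    "abc123".getBytes(StandardCharsets.UTF_8));
            // New constructor variant carrying headers alongside key/value.
            ProducerRecord<String, String> record = new ProducerRecord<>(
                    "my-topic", null, "key", "value",
                    Collections.singletonList(trace));
            // Headers remain mutable on the record until it is sent.
            record.headers().add("origin",
                    "app-1".getBytes(StandardCharsets.UTF_8));
            for (Header h : record.headers())
                System.out.println(h.key() + " -> "
                        + new String(h.value(), StandardCharsets.UTF_8));
        }
    }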

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/IG-Group/kafka KIP-82

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2772.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2772


commit f70c094ce5dcbd8be4d348c6dafc170ae94678dd
Author: Michael Andre Pearce 
Date:   2017-03-30T19:24:48Z

KAFKA-4208: Add Record Headers

As per KIP-82

Adding record headers api to ProducerRecord, ConsumerRecord
Support to convert from protocol to api added Kafka Producer, Kafka Fetcher 
(Consumer)
Updated MirrorMaker, ConsoleConsumer and scala BaseConsumer
Add RecordHeaders and RecordHeader implementation of the interfaces Headers 
and Header




> Add Record Headers
> --
>
> Key: KAFKA-4208
> URL: https://issues.apache.org/jira/browse/KAFKA-4208
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Michael Andre Pearce (IG)
>Priority: Critical
>
> Currently headers are not natively supported unlike many transport and 
> messaging platforms or standard, this is to add support for headers to kafka
> This JIRA is related to KIP found here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2772: KAFKA-4208: Add Record Headers

2017-03-30 Thread michaelandrepearce
GitHub user michaelandrepearce opened a pull request:

https://github.com/apache/kafka/pull/2772

KAFKA-4208: Add Record Headers

As per KIP-82

Adding record headers api to ProducerRecord, ConsumerRecord
Support to convert from protocol to api added Kafka Producer, Kafka Fetcher 
(Consumer)
Updated MirrorMaker, ConsoleConsumer and scala BaseConsumer
Add RecordHeaders and RecordHeader implementation of the interfaces Headers 
and Header

Some bits are reverted to being Java 7 compatible for the moment, until 
KIP-118 is implemented.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/IG-Group/kafka KIP-82

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2772.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2772


commit f70c094ce5dcbd8be4d348c6dafc170ae94678dd
Author: Michael Andre Pearce 
Date:   2017-03-30T19:24:48Z

KAFKA-4208: Add Record Headers

As per KIP-82

Adding record headers api to ProducerRecord, ConsumerRecord
Support to convert from protocol to api added Kafka Producer, Kafka Fetcher 
(Consumer)
Updated MirrorMaker, ConsoleConsumer and scala BaseConsumer
Add RecordHeaders and RecordHeader implementation of the interfaces Headers 
and Header




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-4980) testReprocessingFromScratch unit test failure

2017-03-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-4980:


Assignee: Matthias J. Sax

> testReprocessingFromScratch unit test failure
> -
>
> Key: KAFKA-4980
> URL: https://issues.apache.org/jira/browse/KAFKA-4980
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>
> Got this error in a PR: 
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/2524/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic/
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic
> Failing for the past 1 build (Since Failed#2524 )
> Took 0.18 sec.
> Error Message
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.
> Stacktrace
> org.apache.kafka.common.errors.TopicExistsException: Topic 'inputTopic' 
> already exists.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Guozhang Wang
Damian, could you create a new thread for the voting process?

Thanks!

Guozhang

On Thu, Mar 30, 2017 at 10:33 AM, Bill Bejeck  wrote:

> +1(non-binding)
>
> On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska 
> wrote:
>
> > +1 (non binding)
> >
> > Thanks
> > Eno
> > > On 30 Mar 2017, at 18:01, Matthias J. Sax 
> wrote:
> > >
> > > +1
> > >
> > > On 3/30/17 3:46 AM, Damian Guy wrote:
> > >> Hi All,
> > >>
> > >> I'd like to start the voting thread on KIP-134:
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 134%3A+Delay+initial+consumer+group+rebalance
> > >>
> > >> Thanks,
> > >> Damian
> > >>
> > >
> >
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #2653: MINOR: Update possible errors in OffsetFetchRespon...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2653


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #1388

2017-03-30 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: FetchRequest.Builder maxBytes for version <3

--
[...truncated 1.54 MB...]

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldLogPuts STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldLogPuts PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldInitUnderlyingStore STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldInitUnderlyingStore PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldLogRemoves STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldLogRemoves PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldDelegateToUnderlyingStoreWhenFetching STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStoreTest > 
shouldDelegateToUnderlyingStoreWhenFetching PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
shouldPutAllKeyValuePairs STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
shouldPutAllKeyValuePairs PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
shouldUpdateValuesForExistingKeysOnPutAll STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
shouldUpdateValuesForExistingKeysOnPutAll PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformAllQueriesWithCachingDisabled STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformAllQueriesWithCachingDisabled PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformRangeQueriesWithCachingDisabled STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldPerformRangeQueriesWithCachingDisabled PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldCloseOpenIteratorsWhenStoreClosedAndThrowInvalidStateStoreOnHasNextAndNext
 PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRa

[jira] [Commented] (KAFKA-4689) OffsetValidationTest fails validation with "Current position greater than the total number of consumed records"

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949532#comment-15949532
 ] 

ASF GitHub Bot commented on KAFKA-4689:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2771

KAFKA-4689: Disable system tests for consumer hard failures

See the JIRA for the full details. Essentially the test assertions depend 
on receiving reliable events from the consumer processes, but this is not 
generally possible in the presence of a hard failure (i.e. `kill -9`). Until we 
solve this problem, the hard failure scenarios will be turned off.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2771.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2771


commit 55380bf623f2a5be489818fc5979d77fd7570047
Author: Jason Gustafson 
Date:   2017-03-30T18:15:01Z

KAFKA-4689: Disable system tests for consumer hard failures




> OffsetValidationTest fails validation with "Current position greater than the 
> total number of consumed records"
> ---
>
> Key: KAFKA-4689
> URL: https://issues.apache.org/jira/browse/KAFKA-4689
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, system tests
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>  Labels: system-test-failure
>
> {quote}
> 
> test_id:
> kafkatest.tests.client.consumer_test.OffsetValidationTest.test_consumer_bounce.clean_shutdown=False.bounce_mode=all
> status: FAIL
> run time:   1 minute 49.834 seconds
> Current position greater than the total number of consumed records
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/client/consumer_test.py",
>  line 157, in test_consumer_bounce
> "Current position greater than the total number of consumed records"
> AssertionError: Current position greater than the total number of consumed 
> records
> {quote}
> See also 
> https://issues.apache.org/jira/browse/KAFKA-3513?focusedCommentId=15791790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15791790
>  which is another instance of this bug, indicating the issue goes back 
> at least as far as 1/17/2017. Note that I don't think we've seen this in 
> 0.10.1 yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2771: KAFKA-4689: Disable system tests for consumer hard...

2017-03-30 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2771

KAFKA-4689: Disable system tests for consumer hard failures

See the JIRA for the full details. Essentially the test assertions depend 
on receiving reliable events from the consumer processes, but this is not 
generally possible in the presence of a hard failure (i.e. `kill -9`). Until we 
solve this problem, the hard failure scenarios will be turned off.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2771.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2771


commit 55380bf623f2a5be489818fc5979d77fd7570047
Author: Jason Gustafson 
Date:   2017-03-30T18:15:01Z

KAFKA-4689: Disable system tests for consumer hard failures




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4689) OffsetValidationTest fails validation with "Current position greater than the total number of consumed records"

2017-03-30 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949526#comment-15949526
 ] 

Jason Gustafson commented on KAFKA-4689:


I think I finally see what is happening here. I looked at this test case: 
http://confluent-systest.s3-website-us-west-2.amazonaws.com/confluent-kafka-0-10-2-system-test-results/?prefix=2017-03-30--001.1490888066--apache--0.10.2--2397269/.
 

Here is the assertion error:
{code}
AssertionError: Current position 30897 greater than the total number of 
consumed records 30892
{code}

We are missing 5 records apparently. Looking in the test logs, I found this 
warning:
{code}
[WARNING - 2017-03-30 11:01:19,615 - verifiable_consumer - 
_update_global_position - lineno:222]: Expected next consumed offset of 20037 
for partition TopicPartition(topic=u'test_topic', partition=0), but instead saw 
20042
{code}

This suggests a gap in the consumed data. However, in the event output, we see 
clearly that these offsets were consumed:
{code}
{"timestamp":1490871669110,"count":5,"name":"records_consumed","partitions":[{"topic":"test_topic","partition":0,"count":5,"minOffset":20037,"maxOffset":20041}]}
{code}

And from the consumer log, we can see the offset is committed, though we do not 
see the offset commit in the event output:
{code}
[2017-03-30 11:01:09,110] TRACE Sending OffsetCommit request with 
{test_topic-0=OffsetAndMetadata{offset=20042, metadata=''}} to coordinator 
worker9:9092 (id: 2147483646 rack: null) for group test_group_id 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
{code}

We can conclude that neither the "records_consumed" event nor the 
"offsets_committed" event was delivered to the test framework prior to the 
hard failure. In general, it seems difficult to write reliable hard failure 
test cases which depend on event output from the process experiencing the hard 
failure. There is always a delay between the occurrence of the event and its 
emission and delivery. We probably need to weaken the test assertions to deal 
with the possibility of having missed events, but it's possible that the test 
is no longer useful if we do so. For the time being, I think we should disable 
the hard failure scenarios.
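
To make the delay window concrete, here is a minimal, purely illustrative consumer loop (this is not the actual VerifiableConsumer code, and the event JSON is simplified):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EventRaceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test_group_id");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test_topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100); // 1. consume
                consumer.commitSync();                                        // 2. commit offsets
                // 3. Emit the "records_consumed" event for the test harness. A kill -9
                // landing between (2) and (3) leaves offsets committed broker-side with
                // no matching event, which is exactly the failure described above.
                System.out.println("{\"name\":\"records_consumed\",\"count\":"
                        + records.count() + "}");
            }
        }
    }
}
{code}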

> OffsetValidationTest fails validation with "Current position greater than the 
> total number of consumed records"
> ---
>
> Key: KAFKA-4689
> URL: https://issues.apache.org/jira/browse/KAFKA-4689
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, system tests
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>  Labels: system-test-failure
>
> {quote}
> 
> test_id:
> kafkatest.tests.client.consumer_test.OffsetValidationTest.test_consumer_bounce.clean_shutdown=False.bounce_mode=all
> status: FAIL
> run time:   1 minute 49.834 seconds
> Current position greater than the total number of consumed records
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/client/consumer_test.py",
>  line 157, in test_consumer_bounce
> "Current position greater than the total number of consumed records"
> AssertionError: Current position greater than the total number of consumed 
> records
> {quote}
> See also 
> https://issues.apache.org/jira/browse/KAFKA-3513?focusedCommentId=15791790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15791790
>  which is another instance of this bug, indicating the issue goes back 
> at least as far as 1/17/2017. Note that I don't think we've seen this in 
> 0.10.1 yet.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-03-30 Thread Dong Lin
Hey Jun,

Thanks much for the comment! Do you think we can start the vote for KIP-112 and
KIP-113 if there is no further concern?

Dong

On Thu, Mar 30, 2017 at 10:40 AM, Jun Rao  wrote:

> Hi, Dong,
>
> Ok, so it seems that in solution (2), if the tool exits successfully, then
> we know for sure that all replicas will be in the right log dirs. Solution
> (1) doesn't guarantee that. That seems better and we can go with your
> current solution then.
>
> Thanks,
>
> Jun
>
> On Fri, Mar 24, 2017 at 4:28 PM, Dong Lin  wrote:
>
> > Hey Jun,
> >
> > No.. the current approach described in the KIP (see here
> >  > 3A+Support+replicas+movement+between+log+directories#KIP-
> > 113:Supportreplicasmovementbetweenlogdirectories-2)Howtoreas
> > signreplicabetweenlogdirectoriesacrossbrokers>)
> > also sends ChangeReplicaDirRequest before writing reassignment path in
> ZK.
> > I think we are discussing whether ChangeReplicaDirResponse (1) shows success
> or
> > (2) should specify ReplicaNotAvailableException, if replica has not been
> > created yet.
> >
> > Since both solutions will send ChangeReplicaDirRequest before writing
> > reassignment in ZK, their chance of creating replica in the right
> directory
> > is the same.
> >
> > To take care of the rarer case that some brokers go down immediately
> after
> > the reassignment tool is run, solution (1) requires the reassignment tool to
> > repeatedly send DescribeDirRequest and ChangeReplicaDirRequest, while
> > solution (2) requires the tool to only retry ChangeReplicaDirRequest if the
> > response says ReplicaNotAvailableException. It seems that solution (2) is
> > cleaner because ChangeReplicaDirRequest won't depend on
> DescribeDirRequest.
> > What do you think?
> >
> > Thanks,
> > Dong
> >
> >
> > On Fri, Mar 24, 2017 at 3:56 PM, Jun Rao  wrote:
> >
> > > Hi, Dong,
> > >
> > > We are just comparing whether it's better for the reassignment tool to
> > > send ChangeReplicaDirRequest
> > > (1) before or (2) after writing the reassignment path in ZK .
> > >
> > > In the case when all brokers are alive when the reassignment tool is
> run,
> > > (1) guarantees 100% that the new replicas will be in the right log dirs
> > and
> > > (2) can't.
> > >
> > > In the rarer case that some brokers go down immediately after the
> > > reassignment tool is run, in either approach, there is a chance when
> the
> > > failed broker comes back, it will complete the pending reassignment
> > process
> > > by putting some replicas in the wrong log dirs.
> > >
> > > Implementation wise, (1) and (2) seem to be the same. So, it seems to
> me
> > > that (1) is better?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > > On Thu, Mar 23, 2017 at 11:54 PM, Dong Lin 
> wrote:
> > >
> > > > Hey Jun,
> > > >
> > > > Thanks much for the response! I agree with you that if multiple
> > replicas
> > > > are created in the wrong directory, we may waste resource if either
> > > > replicaMoveThread number is low or intra.broker.throttled.rate is
> slow.
> > > > Then the question is whether the suggested approach increases the
> > chance
> > > of
> > > > replica being created in the correct log directory.
> > > >
> > > > I think the answer is no due to the argument provided in the previous
> > > > email. Sending ChangeReplicaDirRequest before updating znode has
> > > negligible
> > > > impact on the chance that the broker processes
> ChangeReplicaDirRequest
> > > > before LeaderAndIsrRequest from controller. If we still worry about
> the
> > > > order they are sent, the reassignment tool can first send
> > > > ChangeReplicaDirRequest (so that broker remembers it in memory),
> create
> > > > reassignment znode, and then retry ChangeReplicaDirRequset if the
> > > previous
> > > > ChangeReplicaDirResponse says the replica has not been created. This
> > > should
> > > > give us the highest possible chance of creating replica in the
> correct
> > > > directory and avoid the problem of the suggested approach. I have
> > updated
> > > > "How
> > > > to reassign replica between log directories across brokers" in the
> KIP
> > to
> > > > explain this procedure.
> > > >
> > > > To answer your question, the reassignment tool should fail with a
> > > proper
> > > > error message if the user has specified a log directory for a replica on an
> > > > offline broker. This is reasonable because the reassignment tool cannot
> > > > guarantee that the replica will be moved to the specified log
> directory
> > > if
> > > > the broker is offline. If all brokers are online, the reassignment
> tool
> > > may
> > > > hang for up to 10 seconds (by default) to retry ChangeReplicaDirRequest
> if
> > > any
> > > > replica has not been created already. User can change this timeout
> > value
> > > > using the newly-added --timeout argument of the reassignment tool.
> This
> > > is
> > > > specified in the Public Interface section in the KIP. The
> reassignment
> > > tool
> > > > will only block if the user uses this new feature of reassigning replica
> > > > to a specific log directory in the broker. Therefore it seems backward
> > > > compatible.
> > > >
> > > > Does this address the concern?
> > > >
> > > > Thanks,
> > > > Dong
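
As a hedged sketch of the tool-side flow described in this thread: ChangeReplicaDirRequest and the client wrapper below are KIP-113 proposals, not an existing API, so every name here is illustrative.

{code}
interface ChangeReplicaDirResult {
    boolean hasReplicaNotAvailableError();
}

interface ReassignmentClient {
    ChangeReplicaDirResult sendChangeReplicaDirRequest(String assignmentJson);
    void createReassignmentZnode(String assignmentJson);
}

class ReassignmentToolSketch {
    static void reassign(ReassignmentClient client, String assignmentJson,
                         long timeoutMs, long backoffMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        // 1. Tell the brokers the intended log dirs first, so they remember them in memory.
        client.sendChangeReplicaDirRequest(assignmentJson);
        // 2. Only then write the reassignment znode, which triggers the controller.
        client.createReassignmentZnode(assignmentJson);
        // 3. Retry while a broker reports ReplicaNotAvailableException (the replica has
        //    not been created yet), up to the --timeout deadline (10 seconds by default).
        while (client.sendChangeReplicaDirRequest(assignmentJson).hasReplicaNotAvailableError()
                && System.currentTimeMillis() < deadline) {
            Thread.sleep(backoffMs);
        }
    }
}
{code}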

[jira] [Commented] (KAFKA-4973) Transient failure of AdminClientTest.testDeleteRecordsWithException

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949500#comment-15949500
 ] 

ASF GitHub Bot commented on KAFKA-4973:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2760


> Transient failure of AdminClientTest.testDeleteRecordsWithException
> ---
>
> Key: KAFKA-4973
> URL: https://issues.apache.org/jira/browse/KAFKA-4973
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Dong Lin
>  Labels: transient-unit-test-failure
> Fix For: 0.11.0.0
>
>
> One of the tests introduced as part of KIP-107 seems to be failing 
> transiently. [~lindong], can you please take a look?
> {code}
> java.lang.AssertionError: 
> expected:  The request timed out.)> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1381/testReport/junit/kafka.api/AdminClientTest/testDeleteRecordsWithException/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2760: KAFKA-4973; Fix transient failure of AdminClientTe...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2760


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4692) Transient test failure in org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949491#comment-15949491
 ] 

ASF GitHub Bot commented on KAFKA-4692:
---

Github user original-brownbear closed the pull request at:

https://github.com/apache/kafka/pull/2761


> Transient test failure in 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest
> -
>
> Key: KAFKA-4692
> URL: https://issues.apache.org/jira/browse/KAFKA-4692
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Armin Braun
>
> Seen a couple of failures on at least the following two test cases:
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest.shouldReduce
> {code}
> Error Message
> java.lang.AssertionError: 
> Expected: is <[KeyValue(A, A:A), KeyValue(B, B:B), KeyValue(C, C:C), 
> KeyValue(D, D:D), KeyValue(E, E:E)]>
>  but: was <[KeyValue(A, A), KeyValue(A, A:A), KeyValue(B, B:B), 
> KeyValue(C, C:C), KeyValue(D, D:D), KeyValue(E, E:E)]>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <[KeyValue(A, A:A), KeyValue(B, B:B), KeyValue(C, C:C), 
> KeyValue(D, D:D), KeyValue(E, E:E)]>
>  but: was <[KeyValue(A, A), KeyValue(A, A:A), KeyValue(B, B:B), 
> KeyValue(C, C:C), KeyValue(D, D:D), KeyValue(E, E:E)]>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
>   at 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest.shouldReduce(KStreamAggregationDedupIntegrationTest.java:138)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.r

[GitHub] kafka pull request #2761: KAFKA-4692: Make testNo thread safe

2017-03-30 Thread original-brownbear
Github user original-brownbear closed the pull request at:

https://github.com/apache/kafka/pull/2761


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-03-30 Thread Jun Rao
Hi, Dong,

Ok, so it seems that in solution (2), if the tool exits successfully, then
we know for sure that all replicas will be in the right log dirs. Solution
(1) doesn't guarantee that. That seems better and we can go with your
current solution then.

Thanks,

Jun

On Fri, Mar 24, 2017 at 4:28 PM, Dong Lin  wrote:

> Hey Jun,
>
> No.. the current approach described in the KIP (see here
>  3A+Support+replicas+movement+between+log+directories#KIP-
> 113:Supportreplicasmovementbetweenlogdirectories-2)Howtoreas
> signreplicabetweenlogdirectoriesacrossbrokers>)
> also sends ChangeReplicaDirRequest before writing reassignment path in ZK.
> I think we are discussing whether ChangeReplicaDirResponse (1) shows success or
> (2) should specify ReplicaNotAvailableException, if replica has not been
> created yet.
>
> Since both solutions will send ChangeReplicaDirRequest before writing
> reassignment in ZK, their chance of creating replica in the right directory
> is the same.
>
> To take care of the rarer case that some brokers go down immediately after
> the reassignment tool is run, solution (1) requires the reassignment tool to
> repeatedly send DescribeDirRequest and ChangeReplicaDirRequest, while
> solution (2) requires the tool to only retry ChangeReplicaDirRequest if the
> response says ReplicaNotAvailableException. It seems that solution (2) is
> cleaner because ChangeReplicaDirRequest won't depend on DescribeDirRequest.
> What do you think?
>
> Thanks,
> Dong
>
>
> On Fri, Mar 24, 2017 at 3:56 PM, Jun Rao  wrote:
>
> > Hi, Dong,
> >
> > We are just comparing whether it's better for the reassignment tool to
> > send ChangeReplicaDirRequest
> > (1) before or (2) after writing the reassignment path in ZK .
> >
> > In the case when all brokers are alive when the reassignment tool is run,
> > (1) guarantees 100% that the new replicas will be in the right log dirs
> and
> > (2) can't.
> >
> > In the rarer case that some brokers go down immediately after the
> > reassignment tool is run, in either approach, there is a chance when the
> > failed broker comes back, it will complete the pending reassignment
> process
> > by putting some replicas in the wrong log dirs.
> >
> > Implementation wise, (1) and (2) seem to be the same. So, it seems to me
> > that (1) is better?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Thu, Mar 23, 2017 at 11:54 PM, Dong Lin  wrote:
> >
> > > Hey Jun,
> > >
> > > Thanks much for the response! I agree with you that if multiple
> replicas
> > > are created in the wrong directory, we may waste resource if either
> > > replicaMoveThread number is low or intra.broker.throttled.rate is slow.
> > > Then the question is whether the suggested approach increases the
> chance
> > of
> > > replica being created in the correct log directory.
> > >
> > > I think the answer is no due to the argument provided in the previous
> > > email. Sending ChangeReplicaDirRequest before updating znode has
> > negligible
> > > impact on the chance that the broker processes ChangeReplicaDirRequest
> > > before LeaderAndIsrRequest from controller. If we still worry about the
> > > order they are sent, the reassignment tool can first send
> > > ChangeReplicaDirRequest (so that broker remembers it in memory), create
> > > reassignment znode, and then retry ChangeReplicaDirRequset if the
> > previous
> > > ChangeReplicaDirResponse says the replica has not been created. This
> > should
> > > give us the highest possible chance of creating replica in the correct
> > > directory and avoid the problem of the suggested approach. I have
> updated
> > > "How
> > > to reassign replica between log directories across brokers" in the KIP
> to
> > > explain this procedure.
> > >
> > > To answer your question, the reassignment tool should fail with a
> > proper
> > > error message if the user has specified a log directory for a replica on an
> > > offline broker. This is reasonable because the reassignment tool cannot
> > > guarantee that the replica will be moved to the specified log directory
> > if
> > > the broker is offline. If all brokers are online, the reassignment tool
> > may
> > > hang for up to 10 seconds (by default) to retry ChangeReplicaDirRequest if
> > any
> > > replica has not been created already. User can change this timeout
> value
> > > using the newly-added --timeout argument of the reassignment tool. This
> > is
> > > specified in the Public Interface section in the KIP. The reassignment
> > tool
> > > will only block if the user uses this new feature of reassigning replica
> to a
> > > specific log directory in the broker. Therefore it seems backward
> > > compatible.
> > >
> > > Does this address the concern?
> > >
> > > Thanks,
> > > Dong
> > >
> > > On Thu, Mar 23, 2017 at 10:06 PM, Jun Rao  wrote:
> > >
> > > > Hi, Dong,
> > > >
> > > > 11.2 I think there are a few reasons why the cross disk movement may
> > not
> > > > catch up if the replicas are created in the wrong lo

Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Bill Bejeck
+1(non-binding)

On Thu, Mar 30, 2017 at 1:30 PM, Eno Thereska 
wrote:

> +1 (non binding)
>
> Thanks
> Eno
> > On 30 Mar 2017, at 18:01, Matthias J. Sax  wrote:
> >
> > +1
> >
> > On 3/30/17 3:46 AM, Damian Guy wrote:
> >> Hi All,
> >>
> >> I'd like to start the voting thread on KIP-134:
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 134%3A+Delay+initial+consumer+group+rebalance
> >>
> >> Thanks,
> >> Damian
> >>
> >
>
>


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Eno Thereska
+1 (non binding)

Thanks
Eno
> On 30 Mar 2017, at 18:01, Matthias J. Sax  wrote:
> 
> +1
> 
> On 3/30/17 3:46 AM, Damian Guy wrote:
>> Hi All,
>> 
>> I'd like to start the voting thread on KIP-134:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-134%3A+Delay+initial+consumer+group+rebalance
>> 
>> Thanks,
>> Damian
>> 
> 



[jira] [Created] (KAFKA-4985) kafka-acls should resolve dns names and accept ip ranges

2017-03-30 Thread Ryan P (JIRA)
Ryan P created KAFKA-4985:
-

 Summary: kafka-acls should resolve dns names and accept ip ranges
 Key: KAFKA-4985
 URL: https://issues.apache.org/jira/browse/KAFKA-4985
 Project: Kafka
  Issue Type: Improvement
  Components: security
Reporter: Ryan P


Per KAFKA-2869 it looks like a conscious decision was made to move away from 
using hostnames for authorization purposes. 

This is fine; however, IP addresses are terribly inconvenient compared to 
hostnames with regard to configuring ACLs. 

I'd like to propose the following two improvements to make managing these ACLs 
easier for end-users. 

1. Allow for simple patterns to be matched 

e.g. --allow-host 10.17.81.11[1-9] 

2. Allow for hostnames to be used even if they are resolved on the client side. 
Simple pattern matching on hostnames would be a welcome addition as well

e.g. --allow-host host.name.com

Accepting a comma-delimited list of hostnames and IP addresses would also be 
helpful.
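
As a rough illustration of improvement 1 (nothing like this exists in kafka-acls today; interpreting the bracket syntax as a regex character class is just one possible reading):

{code}
import java.util.regex.Pattern;

public class AclHostPatternSketch {
    // Interpret "10.17.81.11[1-9]" as a character-class match on the last digit.
    static boolean matches(String pattern, String host) {
        String regex = pattern.replace(".", "\\.");  // escape dots, keep the [1-9] class
        return Pattern.matches(regex, host);
    }

    public static void main(String[] args) {
        System.out.println(matches("10.17.81.11[1-9]", "10.17.81.115")); // true
        System.out.println(matches("10.17.81.11[1-9]", "10.17.81.110")); // false
    }
}
{code}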





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-03-30 Thread Matthias J. Sax
+1

On 3/30/17 3:46 AM, Damian Guy wrote:
> Hi All,
> 
> I'd like to start the voting thread on KIP-134:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-134%3A+Delay+initial+consumer+group+rebalance
> 
> Thanks,
> Damian
> 



signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] KIP-136: Add Listener name and Security Protocol name to SelectorMetrics tags

2017-03-30 Thread Roger Hoover
Edo,

Thanks for the proposal.  This looks great to me.

Cheers,

Roger

On Thu, Mar 30, 2017 at 8:51 AM, Edoardo Comar  wrote:

> Hi all,
>
> We created KIP-136: Add Listener name and Security Protocol name to
> SelectorMetrics tags
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 136%3A+Add+Listener+name+and+Security+Protocol+name+to+
> SelectorMetrics+tags
>
> Please help review the KIP. Your feedback is appreciated!
>
> cheers,
> Edo
> --
> Edoardo Comar
> IBM MessageHub
> eco...@uk.ibm.com
> IBM UK Ltd, Hursley Park, SO21 2JN
>
> IBM United Kingdom Limited Registered in England and Wales with number
> 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6
> 3AU
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>


[jira] [Commented] (KAFKA-4476) Kafka Streams gets stuck if metadata is missing

2017-03-30 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949374#comment-15949374
 ] 

Matthias J. Sax commented on KAFKA-4476:


That's a different issues, and we have a fix for this already: 
https://github.com/apache/kafka/pull/2757

> Kafka Streams gets stuck if metadata is missing
> ---
>
> Key: KAFKA-4476
> URL: https://issues.apache.org/jira/browse/KAFKA-4476
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Critical
> Fix For: 0.10.2.0
>
>
> When a Kafka Streams application gets started for the first time, it can 
> happen that some topic metadata is missing when 
> {{StreamPartitionAssigner#assign()}} is called on the group leader instance. 
> This can result in an infinite loop within 
> {{StreamPartitionAssigner#assign()}}. This issue was detected by 
> {{ResetIntegrationTest}} that does have a transient timeout failure (c.f. 
> https://issues.apache.org/jira/browse/KAFKA-4058 -- this issue was re-opened 
> multiple times as the problem was expected to be in the test -- however, that 
> is not the case).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[DISCUSS] KIP-137: Enhance TopicCommand --describe to show topics marked for deletion

2017-03-30 Thread Mickael Maison
Hi all,

We created KIP-137: Enhance TopicCommand --describe to show topics
marked for deletion

https://cwiki.apache.org/confluence/display/KAFKA/KIP-137%3A+Enhance+TopicCommand+--describe+to+show+topics+marked+for+deletion

Please help review the KIP. Your feedback is appreciated!

Thanks


[GitHub] kafka pull request #2770: MINOR: Increase max.poll time for streams consumer...

2017-03-30 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/2770

MINOR: Increase max.poll time for streams consumers



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka minor-increase-max-poll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2770


commit 528e82631a41c23cc48106a6f4dc321600375387
Author: Eno Thereska 
Date:   2017-03-30T16:38:05Z

Increase max.poll time




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-03-30 Thread lakshminarayanasyamala (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949356#comment-15949356
 ] 

lakshminarayanasyamala commented on KAFKA-4984:
---

Hi Ait haj Slimane,
Can you please add some more detailed logs, try restarting the broker once, and 
post the results here?

Thanks
Lakshmi

> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: Screenshot from 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between the producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-03-30 Thread lakshminarayanasyamala (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949351#comment-15949351
 ] 

lakshminarayanasyamala commented on KAFKA-4984:
---

Hi Ait haj Slimane,

Can you please add some more detailed logs, try restarting the broker once, and 
post the results here?

Thanks
Lakshmi



> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: Screenshot from 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between the producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS] KIP-135 : Send of null key to a compacted topic should throw non-retriable error back to user

2017-03-30 Thread Mayuresh Gharat
Hi Ismael,

I have updated the KIP. Let me know if everything looks fine, and then I will
begin voting.

Thanks,

Mayuresh

On Wed, Mar 29, 2017 at 9:06 AM, Mayuresh Gharat  wrote:

> Hi Ismael,
>
> I agree. I will change the compatibility paragraph and start voting.
>
> Thanks,
>
> Mayuresh
>
> On Tue, Mar 28, 2017 at 6:40 PM, Ismael Juma  wrote:
>
>> Hi,
>>
>> I think error messages and error codes serve different purposes. Error
>> messages provide additional information about the error, but users should
>> never have to match on a message to handle an error/exception. For this
>> case, it seems like this is a fatal error so we could get away with just
>> using an error message. Having said that, InvalidKeyError is not too
>> specific and I'm OK with that too.
>>
>> As I said earlier, I do think that we need to change the following
>>
>> "It is recommended that we upgrade the clients before the broker is
>> upgraded, so that the clients would be able to understand the new
>> exception."
>>
>> This is problematic since we want older clients to work with newer
>> brokers.
>> That's why I recommended that we only throw this error if the
>> ProduceRequest is version 3 or higher.
>>
>> Ismael
>>
>> P.S. Note that we already send error messages back for the CreateTopics
>> protocol API (I added that in the previous release).
>>
>> On Tue, Mar 28, 2017 at 7:22 AM, Mayuresh Gharat <
>> gharatmayures...@gmail.com
>> > wrote:
>>
>> > I think it's OK to do this right now.
>> > The other KIP will have a wider base to cover as it will include other
>> > exceptions as well and will take time.
>> >
>> > Thanks,
>> >
>> > Mayuresh
>> >
>> > On Mon, Mar 27, 2017 at 11:20 PM Dong Lin  wrote:
>> >
>> > > Sorry, I forgot that you had mentioned this idea in your previous
>> > reply. I
>> > > guess the question is, do we still need this KIP if we can have custom
>> > > error message specified in the exception via the other KIP?
>> > >
>> > >
>> > > On Mon, Mar 27, 2017 at 11:00 PM, Mayuresh Gharat <
>> > > gharatmayures...@gmail.com> wrote:
>> > >
>> > > > Hi Dong,
>> > > >
>> > > > I do agree with that; as I said before, the thought did cross my mind
>> > and I
>> > > > am working on getting another KIP ready to have error responses
>> > returned
>> > > to the client.
>> > > >
>> > > > In my opinion, it's OK to add a new error code if it justifies the
>> > need.
>> > > As
>> > > > Ismael mentioned on the JIRA, we need a specific non-retriable
>> error
>> > > code
>> > > > in this case, with a specific message, at least until the other KIP is
>> > > ready.
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Mayuresh
>> > > > On Mon, Mar 27, 2017 at 10:55 PM Dong Lin 
>> wrote:
>> > > >
>> > > > > Hey Mayuresh,
>> > > > >
>> > > > > I get that you want to provide a more specific error message to the
>> user.
>> > > > Then
>> > > > > would it be more useful to have a KIP that allows custom error
>> > message
>> > > to
>> > > > > be returned to client together with the exception in the response?
>> > For
>> > > > > example, the broker can include in the response
>> > > PolicyViolationException("key
>> > > > > can not be null for non-compact topic ${topic}") and client can
>> print
>> > > > this
>> > > > > error message in the log. My concern with the current KIP is that it is
>> not
>> > > very
>> > > > > scalable to always have a KIP and class for every new error we may
>> > see
>> > > in
>> > > > > the future. The list of error classes, and the errors that need
>> to be
>> > > > > caught and handled by the client code, will increase over time and
>> > > become
>> > > > > harder to maintain.
>> > > > >
>> > > > > Thanks,
>> > > > > Dong
>> > > > >
>> > > > >
>> > > > > On Mon, Mar 27, 2017 at 7:20 PM, Mayuresh Gharat <
>> > > > > gharatmayures...@gmail.com
>> > > > > > wrote:
>> > > > >
>> > > > > > Hi Dong,
>> > > > > >
>> > > > > > I had thought about this before and wanted to do a similar thing.
>> But
>> > > as
>> > > > > was
>> > > > > > pointed out in the JIRA ticket, we wanted something more
>> specific
>> > > than
>> > > > > > general.
>> > > > > > The main issue is that we do not propagate server-side error
>> > messages
>> > > > to
>> > > > > > clients right now. I am working on a KIP proposal for
>> it.
>> > > > > >
>> > > > > > Thanks,
>> > > > > >
>> > > > > > Mayuresh
>> > > > > >
>> > > > > > On Mon, Mar 27, 2017 at 5:55 PM, Dong Lin 
>> > > wrote:
>> > > > > >
>> > > > > > > Hey Mayuresh,
>> > > > > > >
>> > > > > > > Thanks for the patch. I am wondering if it would be better to
>> > add a
>> > > > > more
>> > > > > > > general error, e.g. InvalidMessageException. The benefit is
>> that
>> > we
>> > > > can
>> > > > > > > reuse this for other message-level errors instead of adding one
>> > > > > exception
>> > > > > > > class for each possible exception in the future. This is
>> similar
>> > to
>> > > > the
>> > > > > > use
>> > > > > > > of InvalidRequestException. For example, ListOffsetResponse
>> may
>> > > > return
>> > > > > > 
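
A minimal sketch of the client-side handling being discussed in this thread: the exact exception type (InvalidKeyError vs. a more general InvalidMessageException) is still being debated, so the only assumption made here is that it is non-retriable and surfaces through the producer send callback like other broker errors. Ismael's compatibility point would then be handled broker-side, by returning the new error only for ProduceRequest v3 and above.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RetriableException;

public class NullKeyHandlingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A null key on a compacted topic is the case the KIP wants to reject.
            producer.send(new ProducerRecord<>("compacted-topic", null, "value"),
                    (metadata, exception) -> {
                        if (exception != null && !(exception instanceof RetriableException)) {
                            // Non-retriable: resending the same record cannot succeed,
                            // so fail fast instead of retrying.
                            System.err.println("Fatal produce error: " + exception);
                        }
                    });
        }
    }
}
{code}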

[DISCUSS] KIP-136: Add Listener name and Security Protocol name to SelectorMetrics tags

2017-03-30 Thread Edoardo Comar
Hi all,

We created KIP-136: Add Listener name and Security Protocol name to 
SelectorMetrics tags

https://cwiki.apache.org/confluence/display/KAFKA/KIP-136%3A+Add+Listener+name+and+Security+Protocol+name+to+SelectorMetrics+tags

Please help review the KIP. Your feedback is appreciated!

cheers,
Edo
--
Edoardo Comar
IBM MessageHub
eco...@uk.ibm.com
IBM UK Ltd, Hursley Park, SO21 2JN

IBM United Kingdom Limited Registered in England and Wales with number 
741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 
3AU
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


[jira] [Updated] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-03-30 Thread Ait haj Slimane (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ait haj Slimane updated KAFKA-4984:
---
Priority: Critical  (was: Major)

> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: Screenshot from 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between the producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-03-30 Thread Ait haj Slimane (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ait haj Slimane updated KAFKA-4984:
---
Description: 
I have a problem while trying to produce or consume on a Kerberos-enabled cluster.
I launched a single broker and a console producer,
using SASL authentication between the producer and broker.
When I run the producer, I get the result attached below.

Any advice on what could cause the problem?

Thanks!

configuration used:

server.properties:
listeners=SASL_PLAINTEXT://kafka.example.com:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanism=GSSAPI
sasl.kerberos.service.name=kafka

producer.properties
bootstrap.servers=kafka.example.com:9092
sasl.kerberos.service.name=kafka
security.protocol=SASL_PLAINTEXT

kafka_client_jaas.conf

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
principal="kafkaclient/kafka.example@example.com";
};

  was:
I have a problem while trying to produce or consume on a Kerberos-enabled cluster.
I launched a single broker and a console producer,
using SASL_PLAIN authentication between the producer and broker.
When I run the producer, I get the result attached below.

Any advice on what could cause the problem?

Thanks!

configuration used:

server.properties:
listeners=SASL_PLAINTEXT://kafka.example.com:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanism=GSSAPI
sasl.kerberos.service.name=kafka

producer.properties
bootstrap.servers=kafka.example.com:9092
sasl.kerberos.service.name=kafka
security.protocol=SASL_PLAINTEXT

kafka_client_jaas.conf

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
principal="kafkaclient/kafka.example@example.com";
};


> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
> Attachments: Screenshot from 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between the producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-03-30 Thread Ait haj Slimane (JIRA)
Ait haj Slimane created KAFKA-4984:
--

 Summary: Unable to produce or consume when enabling authentication 
SASL/Kerberos
 Key: KAFKA-4984
 URL: https://issues.apache.org/jira/browse/KAFKA-4984
 Project: Kafka
  Issue Type: Bug
 Environment: Ubuntu 16.04LTS running in VirtualBox
Reporter: Ait haj Slimane
 Attachments: Screenshot from 2017-03-30 15-36-30.png

I have a problem while trying to produce or consume on a Kerberos-enabled cluster.
I launched a single broker and a console producer,
using SASL_PLAIN authentication between the producer and broker.
When I run the producer, I get the result attached below.

Any advice on what could cause the problem?

Thanks!

configuration used:

server.properties:
listeners=SASL_PLAINTEXT://kafka.example.com:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanism=GSSAPI
sasl.kerberos.service.name=kafka

producer.properties
bootstrap.servers=kafka.example.com:9092
sasl.kerberos.service.name=kafka
security.protocol=SASL_PLAINTEXT

kafka_client_jaas.conf

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
principal="kafkaclient/kafka.example@example.com";
};
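
For what it's worth, a frequent cause of this symptom is the JAAS file never reaching the client JVM; with the console producer it is normally passed via KAFKA_OPTS. A minimal Java sketch using the reported settings (the topic name "test" is arbitrary):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SaslProducerSketch {
    public static void main(String[] args) {
        // Point the JVM at the JAAS config from the report; equivalent to
        // -Djava.security.auth.login.config=... on the command line.
        System.setProperty("java.security.auth.login.config",
                "/etc/kafka/kafka_client_jaas.conf");
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "hello"));
        }
    }
}
{code}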



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4981) Add connection-accept-rate and connection-prepare-rate metrics

2017-03-30 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949229#comment-15949229
 ] 

Edoardo Comar commented on KAFKA-4981:
--

Hi [~ijuma], for the 3rd time today :-) ... does this need a KIP too? :-) :-) 

> Add connection-accept-rate and connection-prepare-rate  metrics
> ---
>
> Key: KAFKA-4981
> URL: https://issues.apache.org/jira/browse/KAFKA-4981
> Project: Kafka
>  Issue Type: Improvement
>  Components: network
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> The current set of socket-server-metrics include 
> connection-close-rate and connection-creation-rate.
> On the server-side we'd find useful to have rates for 
> connections accepted and connections 'prepared' (when the channel is ready) 
> to see how many clients do not go through handshake or authentication



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4981) Add connection-accept-rate and connection-prepare-rate metrics

2017-03-30 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949223#comment-15949223
 ] 

Edoardo Comar commented on KAFKA-4981:
--

Thanks, I know the block is skipped for PLAINTEXT.

Actually we do not care that this rate is always 0 for PLAINTEXT - calling it 
authenticated would work for us.

From an ops perspective, we wanted to check how many requests are wasted either 
because of failed TLS handshakes or failed authentications.

In fact, this metric would be more useful if we added tags for the listener the 
network thread belongs to 

see https://issues.apache.org/jira/browse/KAFKA-4982
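
As a rough sketch of how such rates could be registered with the common metrics library (the sensor and metric names are this ticket's proposal, not shipped code; the listener tag mirrors the KAFKA-4982 suggestion):

{code}
import java.util.Collections;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class AcceptRateSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Sensor accepted = metrics.sensor("connections-accepted");
        MetricName name = metrics.metricName(
                "connection-accept-rate", "socket-server-metrics",
                Collections.singletonMap("listener", "SASL_SSL"));
        accepted.add(name, new Rate());
        accepted.record(); // record once per accepted connection
        metrics.close();
    }
}
{code}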



> Add connection-accept-rate and connection-prepare-rate  metrics
> ---
>
> Key: KAFKA-4981
> URL: https://issues.apache.org/jira/browse/KAFKA-4981
> Project: Kafka
>  Issue Type: Improvement
>  Components: network
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> The current set of socket-server-metrics include 
> connection-close-rate and connection-creation-rate.
> On the server-side we'd find useful to have rates for 
> connections accepted and connections 'prepared' (when the channel is ready) 
> to see how many clients do not go through handshake or authentication



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4981) Add connection-accept-rate and connection-prepare-rate metrics

2017-03-30 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949221#comment-15949221
 ] 

Edoardo Comar commented on KAFKA-4981:
--

As per the comments on the pull request, the 'prepared' metric may be better called 
'authenticated' and will have a flat rate of 0 for PLAINTEXT listeners.

[~rsivaram] wrote:
Prepared doesn't convey much meaning in terms of an externally visible metric. 
I imagine you chose it rather than authenticated since you intended it to work 
for PLAINTEXT. But PLAINTEXT doesn't go through this if-block since 
channel.ready() returns true.


> Add connection-accept-rate and connection-prepare-rate  metrics
> ---
>
> Key: KAFKA-4981
> URL: https://issues.apache.org/jira/browse/KAFKA-4981
> Project: Kafka
>  Issue Type: Improvement
>  Components: network
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> The current set of socket-server-metrics include 
> connection-close-rate and connection-creation-rate.
> On the server-side we'd find useful to have rates for 
> connections accepted and connections 'prepared' (when the channel is ready) 
> to see how many clients do not go through handshake or authentication



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4983) Test failure: kafka.api.ConsumerBounceTest.testSubscribeWhenTopicUnavailable

2017-03-30 Thread Magnus Edenhill (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Magnus Edenhill updated KAFKA-4983:
---
Description: 
The PR builder encountered this test failure:
{{kafka.api.ConsumerBounceTest.testSubscribeWhenTopicUnavailable}}

https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/2532/testReport/junit/kafka.api/ConsumerBounceTest/testSubscribeWhenTopicUnavailable/

{noformat}
[2017-03-30 13:42:40,875] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-03-30 13:42:40,884] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-03-30 13:42:41,221] FATAL [Kafka Server 1], Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:118)
kafka.common.KafkaException: Socket server failed to bind to localhost:42198: 
Address already in use.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:330)
at kafka.network.Acceptor.(SocketServer.scala:255)
at kafka.network.SocketServer.$anonfun$startup$1(SocketServer.scala:99)
at 
kafka.network.SocketServer.$anonfun$startup$1$adapted(SocketServer.scala:90)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.startup(SocketServer.scala:90)
at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:122)
at 
kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:91)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
at scala.collection.IterableLike.foreach(IterableLike.scala:71)
at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike.map(TraversableLike.scala:234)
at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
at 
kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:65)
at kafka.api.ConsumerBounceTest.setUp(ConsumerBounceTest.scala:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.Gener
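
The root failure above, "Address already in use", is the classic symptom of
a fixed or recycled test port. A minimal sketch of the usual mitigation,
letting the OS pick a free ephemeral port (names are illustrative, not the
test harness's actual API):

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortSketch {
    public static void main(String[] args) throws IOException {
        // Binding to port 0 asks the OS for a currently free ephemeral port.
        try (ServerSocket socket = new ServerSocket(0)) {
            int port = socket.getLocalPort();
            System.out.println("broker under test could bind to port " + port);
        }
        // Caveat: closing the socket before the broker binds leaves a small
        // race window; holding the socket open and handing it over is safer.
    }
}
{code}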

[jira] [Created] (KAFKA-4983) Test failure: kafka.api.ConsumerBounceTest.testSubscribeWhenTopicUnavailable

2017-03-30 Thread Magnus Edenhill (JIRA)
Magnus Edenhill created KAFKA-4983:
--

 Summary: Test failure: 
kafka.api.ConsumerBounceTest.testSubscribeWhenTopicUnavailable
 Key: KAFKA-4983
 URL: https://issues.apache.org/jira/browse/KAFKA-4983
 Project: Kafka
  Issue Type: Test
Reporter: Magnus Edenhill


The PR builder encountered this test failure:
{{kafka.api.ConsumerBounceTest.testSubscribeWhenTopicUnavailable}}

https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/2532/testReport/junit/kafka.api/ConsumerBounceTest/testSubscribeWhenTopicUnavailable/

{noformat}
[2017-03-30 13:42:40,875] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-03-30 13:42:40,884] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-03-30 13:42:41,221] FATAL [Kafka Server 1], Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:118)
kafka.common.KafkaException: Socket server failed to bind to localhost:42198: 
Address already in use.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:330)
at kafka.network.Acceptor.(SocketServer.scala:255)
at kafka.network.SocketServer.$anonfun$startup$1(SocketServer.scala:99)
at 
kafka.network.SocketServer.$anonfun$startup$1$adapted(SocketServer.scala:90)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.startup(SocketServer.scala:90)
at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:122)
at 
kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:91)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
at scala.collection.IterableLike.foreach(IterableLike.scala:71)
at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike.map(TraversableLike.scala:234)
at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:91)
at 
kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:65)
at kafka.api.ConsumerBounceTest.setUp(ConsumerBounceTest.scala:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.ja

[jira] [Commented] (KAFKA-4476) Kafka Streams gets stuck if metadata is missing

2017-03-30 Thread Magnus Edenhill (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949128#comment-15949128
 ] 

Magnus Edenhill commented on KAFKA-4476:


Directed here from KAFKA-4482.

Happened again on trunk PR:
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/2536/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic/

> Kafka Streams gets stuck if metadata is missing
> ---
>
> Key: KAFKA-4476
> URL: https://issues.apache.org/jira/browse/KAFKA-4476
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Critical
> Fix For: 0.10.2.0
>
>
> When a Kafka Streams application is started for the first time, it can 
> happen that some topic metadata is missing when 
> {{StreamPartitionAssigner#assign()}} is called on the group leader instance. 
> This can result in an infinite loop within 
> {{StreamPartitionAssigner#assign()}}. The issue was detected by 
> {{ResetIntegrationTest}}, which has a transient timeout failure (cf. 
> https://issues.apache.org/jira/browse/KAFKA-4058 -- that issue was re-opened 
> multiple times because the problem was expected to be in the test; however, 
> that is not the case).
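
To make the described failure mode concrete, a hedged sketch (illustrative
only, not the actual {{StreamPartitionAssigner}} code) of an unbounded
metadata wait versus a bounded one:

{code:java}
import java.util.Map;

public class MetadataWaitSketch {
    // Stand-in for the assignor's metadata lookup; returns null while the
    // topic's partition count is still unknown.
    static Integer partitionCount(Map<String, Integer> metadata, String topic) {
        return metadata.get(topic);
    }

    // The failure mode described above: if metadata for the topic never
    // arrives, this loop spins forever.
    static int waitUnbounded(Map<String, Integer> metadata, String topic) {
        Integer count;
        while ((count = partitionCount(metadata, topic)) == null) {
            // no timeout, no refresh -> potential infinite loop
        }
        return count;
    }

    // A bounded variant fails fast instead of hanging the rebalance.
    static int waitBounded(Map<String, Integer> metadata, String topic,
                           int maxAttempts) throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            Integer count = partitionCount(metadata, topic);
            if (count != null) return count;
            Thread.sleep(100L); // back off before re-checking
        }
        throw new IllegalStateException("no metadata for topic " + topic);
    }
}
{code}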



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4937) Batch resetting offsets in Streams' StoreChangelogReader

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949081#comment-15949081
 ] 

ASF GitHub Bot commented on KAFKA-4937:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2769

KAFKA-4937: Batch resetting offsets in Streams' StoreChangelogReader

Change `consumer.position` so that it always updates any partitions that 
need an update. Keep track of the partitions on which `seekToBeginning` was 
called in `StoreChangeLogReader`, and make the `consumer.position` calls 
only after all `seekToBeginning` calls.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4937

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2769.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2769


commit 9c4306040a1b6802de9dad84351477f4f3145c74
Author: Damian Guy 
Date:   2017-03-30T13:38:34Z

batch fetch of consumer positions




> Batch resetting offsets in Streams' StoreChangelogReader
> 
>
> Key: KAFKA-4937
> URL: https://issues.apache.org/jira/browse/KAFKA-4937
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: newbie++, performance
>
> Currently in {{StoreChangelogReader}} we call {{consumer.position()}} both 
> when logging and when setting the starting offset right after 
> {{seekToBeginning}}, which incurs a blocking round trip for the offset 
> request. We should consider batching those into a single round trip for all 
> partitions that need to seek to the beginning (i.e. need to reset offsets).
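
A hedged sketch of the batching pattern at the level of the public consumer
API (the helper is illustrative; the actual PR also changes
{{consumer.position}} internals so pending offset lookups happen in one
round trip):

{code:java}
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class BatchedResetSketch {
    // Reset every partition that needs it with a single seekToBeginning
    // call, and only afterwards read back positions, instead of
    // interleaving seekToBeginning/position per partition.
    static Map<TopicPartition, Long> resetToBeginning(
            Consumer<byte[], byte[]> consumer,
            Collection<TopicPartition> needReset) {
        consumer.seekToBeginning(needReset);

        Map<TopicPartition, Long> startOffsets = new HashMap<>();
        for (TopicPartition tp : needReset) {
            startOffsets.put(tp, consumer.position(tp));
        }
        return startOffsets;
    }
}
{code}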



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2769: KAFKA-4937: Batch resetting offsets in Streams' St...

2017-03-30 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2769

KAFKA-4937: Batch resetting offsets in Streams' StoreChangelogReader

Change `consumer.position` so that it always updates any partitions that 
need an update. Keep track of the partitions on which `seekToBeginning` was 
called in `StoreChangeLogReader`, and make the `consumer.position` calls 
only after all `seekToBeginning` calls.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4937

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2769.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2769


commit 9c4306040a1b6802de9dad84351477f4f3145c74
Author: Damian Guy 
Date:   2017-03-30T13:38:34Z

batch fetch of consumer positions




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics

2017-03-30 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949079#comment-15949079
 ] 

Ismael Juma commented on KAFKA-4982:


Yes.

> Add listener tag to socket-server-metrics.connection-... metrics  
> --
>
> Key: KAFKA-4982
> URL: https://issues.apache.org/jira/browse/KAFKA-4982
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> Metrics in socket-server-metrics, like connection-count and 
> connection-close-rate, are tagged with a networkProcessor id, where the id 
> of a network processor is just a numeric integer.
> If you have more than one listener (e.g. PLAINTEXT, SASL_SSL), the id just 
> keeps incrementing, and when looking at the metrics it is hard to match a 
> metric tag to a listener.
> You need to know the number of network threads and the order in which the 
> listeners are declared in the brokers' server.properties.
> We should add a tag showing the listener label; that would also make it 
> much easier to group the metrics in a tool like Grafana.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics

2017-03-30 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949036#comment-15949036
 ] 

Edoardo Comar commented on KAFKA-4982:
--

[~ijuma] does this need a kip ?

> Add listener tag to socket-server-metrics.connection-... metrics  
> --
>
> Key: KAFKA-4982
> URL: https://issues.apache.org/jira/browse/KAFKA-4982
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> Metrics in socket-server-metrics, like connection-count and 
> connection-close-rate, are tagged with a networkProcessor id, where the id 
> of a network processor is just a numeric integer.
> If you have more than one listener (e.g. PLAINTEXT, SASL_SSL), the id just 
> keeps incrementing, and when looking at the metrics it is hard to match a 
> metric tag to a listener.
> You need to know the number of network threads and the order in which the 
> listeners are declared in the brokers' server.properties.
> We should add a tag showing the listener label; that would also make it 
> much easier to group the metrics in a tool like Grafana.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics

2017-03-30 Thread Edoardo Comar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edoardo Comar reassigned KAFKA-4982:


Assignee: Edoardo Comar

> Add listener tag to socket-server-metrics.connection-... metrics  
> --
>
> Key: KAFKA-4982
> URL: https://issues.apache.org/jira/browse/KAFKA-4982
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> Metrics in socket-server-metrics, like connection-count and 
> connection-close-rate, are tagged with a networkProcessor id, where the id 
> of a network processor is just a numeric integer.
> If you have more than one listener (e.g. PLAINTEXT, SASL_SSL), the id just 
> keeps incrementing, and when looking at the metrics it is hard to match a 
> metric tag to a listener.
> You need to know the number of network threads and the order in which the 
> listeners are declared in the brokers' server.properties.
> We should add a tag showing the listener label; that would also make it 
> much easier to group the metrics in a tool like Grafana.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4982) Add listener tag to socket-server-metrics.connection-... metrics

2017-03-30 Thread Edoardo Comar (JIRA)
Edoardo Comar created KAFKA-4982:


 Summary: Add listener tag to socket-server-metrics.connection-... 
metrics  
 Key: KAFKA-4982
 URL: https://issues.apache.org/jira/browse/KAFKA-4982
 Project: Kafka
  Issue Type: Improvement
Reporter: Edoardo Comar


Metrics in socket-server-metrics, like connection-count and 
connection-close-rate, are tagged with a networkProcessor id, where the id 
of a network processor is just a numeric integer.

If you have more than one listener (e.g. PLAINTEXT, SASL_SSL), the id just 
keeps incrementing, and when looking at the metrics it is hard to match a 
metric tag to a listener.

You need to know the number of network threads and the order in which the 
listeners are declared in the brokers' server.properties.

We should add a tag showing the listener label; that would also make it 
much easier to group the metrics in a tool like Grafana.
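
A hedged sketch of what the proposed tagging could look like with the
common {{Metrics}} registry (the tag name "listener" is an assumption, not
a settled choice):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;

public class ListenerTagSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();

        // Today's tagging: only the numeric processor id.
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("networkProcessor", "3");

        // Proposed: also carry the listener label, so tools like Grafana
        // can group metrics per listener.
        tags.put("listener", "SASL_SSL");

        MetricName name = metrics.metricName(
                "connection-count", "socket-server-metrics",
                "Current number of connections", tags);
        System.out.println(name);

        metrics.close();
    }
}
{code}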


--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4981) Add connection-accept-rate and connection-prepare-rate metrics

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949029#comment-15949029
 ] 

ASF GitHub Bot commented on KAFKA-4981:
---

GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/2768

KAFKA-4981 Add connection-accept-rate and connection-prepare-rate

KAFKA-4981 Add connection-accept-rate and connection-prepare-rate metrics

Added metrics per network processor for the rate of connections accepted 
and the rate of connections ready for work.

developed with @mimaison

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka KAFKA-4981

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2768.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2768


commit 637b68373d0e79a57058340672a05d34d8f8e388
Author: Edoardo Comar 
Date:   2017-03-30T11:06:12Z

KAFKA-4981 Add connection-accept-rate and connection-prepare-rate
metrics

added metrics per network processor with the rates of connections
accepted and ready for work

developed with @mimaison




> Add connection-accept-rate and connection-prepare-rate  metrics
> ---
>
> Key: KAFKA-4981
> URL: https://issues.apache.org/jira/browse/KAFKA-4981
> Project: Kafka
>  Issue Type: Improvement
>  Components: network
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>
> The current set of socket-server-metrics includes connection-close-rate 
> and connection-creation-rate.
> On the server side we'd find it useful to also have rates for connections 
> accepted and connections 'prepared' (when the channel is ready), to see how 
> many clients do not get through handshake or authentication.
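
A hedged sketch of registering such a rate with the common {{Metrics}} API
(sensor and metric names mirror the proposal but are assumptions here):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class AcceptRateSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("networkProcessor", "0");

        // One sensor per network processor counting accepted connections.
        Sensor accepted = metrics.sensor("connections-accepted");
        accepted.add(metrics.metricName("connection-accept-rate",
                "socket-server-metrics",
                "Rate of connections accepted", tags), new Rate());

        // The processor would record one event per accept():
        accepted.record();

        metrics.close();
    }
}
{code}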



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2768: KAFKA-4981 Add connection-accept-rate and connecti...

2017-03-30 Thread edoardocomar
GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/2768

KAFKA-4981 Add connection-accept-rate and connection-prepare-rate

KAFKA-4981 Add connection-accept-rate and connection-prepare-rate metrics

Added metrics per network processor for the rate of connections accepted 
and the rate of connections ready for work.

developed with @mimaison

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka KAFKA-4981

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2768.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2768


commit 637b68373d0e79a57058340672a05d34d8f8e388
Author: Edoardo Comar 
Date:   2017-03-30T11:06:12Z

KAFKA-4981 Add connection-accept-rate and connection-prepare-rate
metrics

added metrics per network processor with the rates of connections
accepted and ready for work

developed with @mimaison




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2767: Vagrant provisioning fixes

2017-03-30 Thread edenhill
GitHub user edenhill opened a pull request:

https://github.com/apache/kafka/pull/2767

Vagrant provisioning fixes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edenhill/kafka harden_provision

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2767.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2767


commit b6513183b1daf852e7e39e1676ff3e7fbff45795
Author: Magnus Edenhill 
Date:   2017-03-30T12:57:30Z

MINOR: Silence JDK installer's progress bar

commit 524a70d23a895d7fe71bef52068704757724dfe3
Author: Magnus Edenhill 
Date:   2017-03-30T12:58:48Z

MINOR: Verify correct provisioning of /vagrant mount

This is intended to catch vagrant instance provisioning errors much earlier.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2595: MINOR: Fix typos in javadoc and code comments

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2595


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2620: MINOR: Doc change related to ZK sasl configs

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2620


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4902) Utils#delete should correctly handle I/O errors and symlinks

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15948982#comment-15948982
 ] 

ASF GitHub Bot commented on KAFKA-4902:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2691


> Utils#delete should correctly handle I/O errors and symlinks
> 
>
> Key: KAFKA-4902
> URL: https://issues.apache.org/jira/browse/KAFKA-4902
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>
> Currently, Utils#delete silently ignores I/O errors. It also does not 
> properly handle symlinks, and could get into an infinite loop when symlinks 
> are present.
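
A hedged sketch of the direction such a fix can take (illustrative, not the
actual Utils#delete patch): a recursive delete that propagates I/O errors
and removes a symlink itself instead of following it into its target.

{code:java}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class SafeDeleteSketch {
    static void deleteRecursively(Path root) throws IOException {
        if (Files.notExists(root, LinkOption.NOFOLLOW_LINKS))
            return;
        // walkFileTree does not follow symlinks by default, so a symlinked
        // directory arrives at visitFile and only the link gets deleted.
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.delete(file); // throws instead of silently ignoring errors
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc)
                    throws IOException {
                if (exc != null) throw exc; // surface traversal errors
                Files.delete(dir);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        deleteRecursively(Paths.get("/tmp/some-test-dir")); // illustrative path
    }
}
{code}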



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2691: KAFKA-4902: Utils#delete should correctly handle I...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2691


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

