[GitHub] kafka pull request #3990: simple change to KafkaHealth

2017-09-28 Thread prasincs
GitHub user prasincs opened a pull request:

https://github.com/apache/kafka/pull/3990

simple change to KafkaHealth



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/prasincs/kafka KAFKA-5473

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3990.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3990


commit 09ebb532f883f175e0e581d2cfa0550183b93d7b
Author: Prasanna Gautam 
Date:   2017-09-29T06:24:58Z

simple change to KafkaHealth




---


Re: Request to add to Contributors list

2017-09-28 Thread Jun Rao
Hi, Siva,

Thanks for your interest. Added you to the contributor list.

Jun

On Thu, Sep 28, 2017 at 5:52 PM, Siva Santhalingam <
siva.santhalin...@gmail.com> wrote:

> Hi There,
>
> I'm a beginner to the Apache Kafka project. Could you please add me to the
> contributors list.
>
> My Account credentials:
>
> Username: sssanthalingam
> Email: siva.santhalin...@gmail.com
> Full Name: siva santhalingam
>
> Thanks,
> Siva
>


Request to add to Contributors list

2017-09-28 Thread Siva Santhalingam
Hi There,

I'm a beginner to the Apache Kafka project. Could you please add me to the
contributors list.

My Account credentials:

Username: sssanthalingam
Email: siva.santhalin...@gmail.com
Full Name: siva santhalingam

Thanks,
Siva


Issue using Kafka ZkUtils

2017-09-28 Thread ArunaSai . Kannamareddy
Hello Team,

I have the below issue with the Kafka ZkUtils API. When debugging, IntelliJ 
points me to the ZkUtils class, but when I try to use it, only the methods 
listed under the ZkUtils object show up in the auto-suggestions.

import kafka.utils.{TestUtils, ZKStringSerializer, ZkUtils}
import org.I0Itec.zkclient.{ZkClient, ZkConnection}
import kafka.utils.ZkUtils._

var zkUtils: ZkUtils = _
//val zkConnection= new ZkConnection(zkConn)
//zkClient = new ZkClient(zkConn,5000,5000,ZKStringSerializer)
//val zkUtils = new ZkUtils(zkClient, zkConnection, false)
val (zkClient, zkConnection) = ZkUtils.createZkClientAndConnection(
  zkConn, sessionTimeoutMs, connectionTimeoutMs)
zkUtils = new ZkUtils(zkClient, zkConnection, false)

I am trying to create a ZkUtils object but am facing issues. My Gradle build is 
successful, but when I try to run the test file I see the errors below:

Error:(27, 16) not found: type ZkUtils
  var zkUtils: ZkUtils = _
Error:(44, 44) value createZkClientAndConnection is not a member of object 
kafka.utils.ZkUtils
val (zkClient, zkConnection) = ZkUtils.createZkClientAndConnection(
Error:(46, 20) not found: type ZkUtils
 zkUtils = new ZkUtils(zkClient, zkConnection, false)

I have the following dependencies in my gradle build file as well.

compile group: 'org.apache.kafka', name: 'kafka_2.11', version: '0.10.2.1', classifier: 'test'
compile group: 'org.apache.kafka', name: 'kafka_2.11', version: '0.10.2.1'
testCompile group: 'org.apache.kafka', name: 'kafka_2.11', version: '0.10.2.1', classifier: 'test'
testCompile group: 'org.apache.kafka', name: 'kafka_2.11', version: '0.10.2.1'

Can anyone tell me what’s going wrong here? Thanks for your help.

Thanks,
Aruna.


Re: [VOTE] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-09-28 Thread Jeff Widman
+1 (non-binding)

On Sep 28, 2017 12:13 PM, "Vahid S Hashemian" 
wrote:

> I'm bumping this up as it's awaiting one more binding +1, but I'd like to
> also mention a recent change to the KIP.
>
> Since the current DescribeGroup response protocol does not include
> member-specific information such as preferred assignment strategies, or
> topic subscriptions, I've removed the corresponding ASSIGNMENT-STRATEGY
> and SUBSCRIPTION columns from --members and --members --verbose options,
> respectively. These columns will be added back once KIP-181 (that aims to
> enhance DescribeGroup response) is in place. I hope this small
> modification is reasonable. If needed, we can continue the discussion on
> the discussion thread.
>
> And I'm not sure if this change requires a re-vote.
>
> Thanks.
> --Vahid
>
>
>
> From:   "Vahid S Hashemian" 
> To: dev 
> Date:   07/27/2017 02:04 PM
> Subject:[VOTE] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hi all,
>
> Thanks to everyone who participated in the discussion on KIP-175, and
> provided feedback.
> The KIP can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
>
> .
> I believe the concerns have been addressed in the recent version of the
> KIP; so I'd like to start a vote.
>
> Thanks.
> --Vahid
>
>
>
>
>
>


[jira] [Created] (KAFKA-5993) Kafka AdminClient does not support standard security settings

2017-09-28 Thread Stephane Maarek (JIRA)
Stephane Maarek created KAFKA-5993:
--

 Summary: Kafka AdminClient does not support standard security 
settings
 Key: KAFKA-5993
 URL: https://issues.apache.org/jira/browse/KAFKA-5993
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.1
Reporter: Stephane Maarek


Kafka Admin Client does not support basic security configurations, such as 
"sasl.jaas.config". This makes it impossible to use against a secure cluster.

```
14:12:12.948 [main] WARN  org.apache.kafka.clients.admin.AdminClientConfig - 
The configuration 'sasl.jaas.config' was supplied but isn't a known config.
```
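For reference, a minimal sketch of the client properties involved. The property keys are the standard Kafka client config names; the broker address, username, and password are placeholder values, and in a fixed client this `Properties` object would be what gets passed to `AdminClient.create(props)`:

```java
import java.util.Properties;

public class AdminClientSecurityProps {
    public static Properties securityProps() {
        Properties props = new Properties();
        // Placeholder bootstrap address; substitute a real secured broker.
        props.put("bootstrap.servers", "broker:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // The key the AdminClient warns about and ignores in 0.11.0.1.
        // Credentials below are placeholders.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"alice\" password=\"secret\";");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(securityProps().getProperty("security.protocol"));
    }
}
```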



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5992) Better Java Documentation for AdminClient Exceptions

2017-09-28 Thread Stephane Maarek (JIRA)
Stephane Maarek created KAFKA-5992:
--

 Summary: Better Java Documentation for AdminClient Exceptions
 Key: KAFKA-5992
 URL: https://issues.apache.org/jira/browse/KAFKA-5992
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.1
Reporter: Stephane Maarek


When invoking a describeTopics operation on a topic that does not exist, we get 
an InvalidTopicException as a RuntimeException.

I believe this should be documented, and the API maybe changed:

For example changing:
{code:java}
public DescribeTopicsResult describeTopics(Collection<String> topicNames) {
{code}

To:
{code:java}
public DescribeTopicsResult describeTopics(Collection<String> topicNames) 
throws InvalidTopicException 
{code}

Additionally, if multiple topics don't exist, only the first one will throw an 
error. This is really not scalable. 

Maybe the DescribeTopicsResult could have a Boolean "topicExists" ? 
Up for discussion
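The per-topic failure behavior can be illustrated with plain `CompletableFuture`s standing in for the per-topic future map that `DescribeTopicsResult.values()` returns. The topic names and the `topicExists` helper here are hypothetical illustrations, not the AdminClient API itself:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class PerTopicCheck {
    // Check each topic's future individually so the first failure does not
    // mask the others, mirroring the per-topic map idea discussed above.
    public static Map<String, Boolean> topicExists(Map<String, CompletableFuture<String>> futures) {
        Map<String, Boolean> exists = new LinkedHashMap<>();
        for (Map.Entry<String, CompletableFuture<String>> e : futures.entrySet()) {
            try {
                e.getValue().get();
                exists.put(e.getKey(), true);
            } catch (ExecutionException | InterruptedException ex) {
                // e.g. a wrapped InvalidTopicException for a missing topic
                exists.put(e.getKey(), false);
            }
        }
        return exists;
    }

    public static void main(String[] args) {
        Map<String, CompletableFuture<String>> futures = new LinkedHashMap<>();
        futures.put("good", CompletableFuture.completedFuture("description"));
        CompletableFuture<String> bad = new CompletableFuture<>();
        bad.completeExceptionally(new RuntimeException("InvalidTopicException stand-in"));
        futures.put("bad", bad);
        System.out.println(topicExists(futures)); // {good=true, bad=false}
    }
}
```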





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk9 #72

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5957; Prevent second deallocate if response for aborted batch

--
[...truncated 1.47 MB...]
kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED

kafka.network.SocketServerTest > processDisconnectedException PASSED

kafka.network.SocketServerTest > sendCancelledKeyException STARTED

kafka.network.SocketServerTest > sendCancelledKeyException PASSED

kafka.network.SocketServerTest > processCompletedReceiveException STARTED

kafka.network.SocketServerTest > processCompletedReceiveException PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown STARTED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > pollException STARTED

kafka.network.SocketServerTest > pollException PASSED

kafka.network.SocketServerTest > testSslSocketServer STARTED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefau

Build failed in Jenkins: kafka-trunk-jdk7 #2829

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5957; Prevent second deallocate if response for aborted batch

--
[...truncated 426.55 KB...]

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogClean

Build failed in Jenkins: kafka-trunk-jdk8 #2084

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5957; Prevent second deallocate if response for aborted batch

--
[...truncated 553.28 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.in

[GitHub] kafka pull request #3942: KAFKA-5957: Prevent second deallocate if response ...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3942


---


[GitHub] kafka pull request #3989: KAFKA-5746; Fix conversion count computed in `down...

2017-09-28 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3989

KAFKA-5746; Fix conversion count computed in `downConvert`

It should be the number of records instead of the
number of batches.

A few additional clean-ups: minor renames,
removal of unused code, test fixes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-5746-health-metrics-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3989.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3989


commit a85feb150a4aea8d879535c27ddb857278790e63
Author: Ismael Juma 
Date:   2017-09-29T01:40:29Z

KAFKA-5746; Fix downConvert's conversion count

It should be the number of records instead of the
number of batches.

A few additional clean-ups: minor renames,
removal of unused code, test fixes.




---


[jira] [Resolved] (KAFKA-5989) disableLogging() causes an initialization loop

2017-09-28 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5989.

Resolution: Duplicate

\cc [~damianguy] If this is not a duplicate, please reopen.

> disableLogging() causes an initialization loop
> --
>
> Key: KAFKA-5989
> URL: https://issues.apache.org/jira/browse/KAFKA-5989
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Tuan Nguyen
> Attachments: App.java
>
>
> Using {{disableLogging()}} for either of the built-in state store types 
> causes an initialization loop in the StreamThread.
> Case A - this works just fine:
> {code}
>   final StateStoreSupplier testStore = Stores.create(topic)
>   .withStringKeys()
>   .withStringValues()
>   .inMemory()
> //.disableLogging() 
>   .maxEntries(10)
>   .build();
> {code}
> Case B - this does not:
> {code}
>   final StateStoreSupplier testStore = Stores.create(topic)
>   .withStringKeys()
>   .withStringValues()
>   .inMemory()
>   .disableLogging() 
>   .maxEntries(10)
>   .build();
> {code}
> A brief debugging dive shows that in Case B, 
> {{AssignedTasks.allTasksRunning()}} never returns true, because of a remnant 
> entry in {{AssignedTasks#restoring}} that never gets properly restored.
> See [^App.java] for a working test (requires ZK + Kafka ensemble, and at 
> least one keyed message produced to the "test" topic)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3988: KAFKA-5967 Ineffective check of negative value in ...

2017-09-28 Thread shivsantham
GitHub user shivsantham opened a pull request:

https://github.com/apache/kafka/pull/3988

KAFKA-5967 Ineffective check of negative value in CompositeReadOnlyKe…

package name: org.apache.kafka.streams.state.internals
Minor change to approximateNumEntries() method in 
CompositeReadOnlyKeyValueStore class.

long total = 0;
for (ReadOnlyKeyValueStore<K, V> store : stores) {
    total += store.approximateNumEntries();
}
return total < 0 ? Long.MAX_VALUE : total;

The check for a negative value seems to account for wrapping. However, 
wrapping can happen within the for loop, so the check should be performed 
inside the loop.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shivsantham/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3988.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3988


commit ff377759a943c7bfb89a56ad721e7ba1b3b0b24c
Author: siva santhalingam 
Date:   2017-09-28T23:37:47Z

KAFKA-5967 Ineffective check of negative value in 
CompositeReadOnlyKeyValueStore#approximateNumEntries()

long total = 0;
for (ReadOnlyKeyValueStore<K, V> store : stores) {
    total += store.approximateNumEntries();
}
return total < 0 ? Long.MAX_VALUE : total;

The check for a negative value seems to account for wrapping. However, 
wrapping can happen within the for loop, so the check should be performed 
inside the loop.
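The in-loop check described above can be sketched in isolation. The `approximateTotal` helper name is hypothetical, and plain longs stand in for the stores' counts:

```java
public class SaturatingSum {
    // Sum non-negative counts, saturating at Long.MAX_VALUE as soon as the
    // running total wraps negative -- the in-loop check the PR argues for.
    public static long approximateTotal(long[] counts) {
        long total = 0;
        for (long c : counts) {
            total += c;
            if (total < 0) {           // overflow wrapped past Long.MAX_VALUE
                return Long.MAX_VALUE; // a check only after the loop could miss
            }                          // a total that wrapped back to positive
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(approximateTotal(new long[]{1, 2, 3})); // 6
        // Three MAX_VALUE counts wrap back to a positive value, which a
        // single post-loop check would report as a (wrong) finite total.
        System.out.println(approximateTotal(
                new long[]{Long.MAX_VALUE, Long.MAX_VALUE, Long.MAX_VALUE}));
    }
}
```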




---


[GitHub] kafka pull request #3987: KAFKA-5990: Enable generation of metrics docs for ...

2017-09-28 Thread rhauch
GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/3987

KAFKA-5990: Enable generation of metrics docs for Connect

A new mechanism was added recently to the Metrics framework to make it 
easier to generate the documentation. It uses a registry with a 
MetricNameTemplate for each metric, and then those templates are used when 
creating the actual metrics. The metrics framework provides utilities that can 
generate the HTML documentation from the registry of templates.

This change moves the recently-added Connect metrics over to use these 
templates and to then generate the metric documentation for Connect.

This PR is based upon #3975 and can be rebased once that has been merged.
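The template mechanism described above can be sketched with a toy registry. This is an illustration of the pattern only, not the actual Kafka Metrics API; the class and method names below are stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

public class MetricTemplateSketch {
    // Toy stand-in for Kafka's MetricNameTemplate: a metric name plus doc text.
    static final class Template {
        final String name;
        final String description;
        Template(String name, String description) {
            this.name = name;
            this.description = description;
        }
    }

    // One registry holds every template, so the same list can drive both
    // metric creation and documentation generation.
    static final List<Template> REGISTRY = new ArrayList<>();

    static Template register(String name, String description) {
        Template t = new Template(name, description);
        REGISTRY.add(t);
        return t;
    }

    // Documentation is generated from the registry of templates,
    // not from live metric instances.
    static String toHtmlTable(List<Template> templates) {
        StringBuilder sb = new StringBuilder("<table>");
        for (Template t : templates) {
            sb.append("<tr><td>").append(t.name)
              .append("</td><td>").append(t.description).append("</td></tr>");
        }
        return sb.append("</table>").toString();
    }

    public static void main(String[] args) {
        register("sink-record-lag-max", "Maximum record lag behind the consumer");
        System.out.println(toHtmlTable(REGISTRY));
    }
}
```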

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-5990

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3987.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3987


commit 23aebcdf7797923865a771e7acac89e7e2572e3d
Author: Randall Hauch 
Date:   2017-09-26T15:28:12Z

KAFKA-5902 Added sink task metrics

commit 9824e329ef599c59ea4e7f60cf46ec907b516d90
Author: Randall Hauch 
Date:   2017-09-28T20:47:39Z

KAFKA-5902 Changed to measuring task processing lag behind consumer

Changed the `sink-record-lag-max` metric to be the maximum lag in terms of 
number of records that the sink task is behind the consumer's position for any 
topic partitions. This is not ideal, since often “lag” is defined to 
represent how far behind the task (or consumer) is relative to the end of the 
topic partition. However, the most recent offset for the topic partition is not 
easy to access in Connect.

commit d38dbde9f72926c68383729f9c80513879913cde
Author: Randall Hauch 
Date:   2017-09-28T19:12:21Z

KAFKA-5990 Enable generation of metrics docs for Connect

A new mechanism was added recently to the Metrics framework to make it 
easier to generate the documentation. It uses a registry with a 
MetricNameTemplate for each metric, and then those templates are used when 
creating the actual metrics. The metrics framework provides utilities that can 
generate the HTML documentation from the registry of templates.

This change moves the recently-added Connect metrics over to use these 
templates and to then generate the metric documentation for Connect.




---


[jira] [Created] (KAFKA-5991) Change Consumer per partition lag metrics to put topic-partition-id in tags instead of metric name

2017-09-28 Thread Kevin Lu (JIRA)
Kevin Lu created KAFKA-5991:
---

 Summary: Change Consumer per partition lag metrics to put 
topic-partition-id in tags instead of metric name
 Key: KAFKA-5991
 URL: https://issues.apache.org/jira/browse/KAFKA-5991
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.11.0.0, 0.10.2.0, 0.10.1.0, 0.10.0.1, 0.10.0.0
Reporter: Kevin Lu
Priority: Minor


A KIP will be needed for this (?) as this requires a change to the public API 
(metric).

[KIP-92](https://cwiki.apache.org/confluence/display/KAFKA/KIP-92+-+Add+per+partition+lag+metrics+to+KafkaConsumer)
 brought per partition lag metrics to KafkaConsumer, but these metrics put the 
"TOPIC-PARTITION_ID" inside of the metric name itself. These metrics should 
instead utilize the tags and put key="topic-partition" and 
value="TOPIC-PARTITION_ID". 

Per-broker (node) and per-topic metrics utilize tags in this way by putting 
key="node/topic" and value="NODE_ID/TOPIC_NAME".
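The difference between the two naming schemes can be shown with plain strings and maps. The helper names are hypothetical, and the exact metric-name layout is an assumption based on the ticket's "TOPIC-PARTITION_ID in the metric name" description:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LagMetricNaming {
    // Current scheme: topic and partition id baked into the metric name.
    static String perNameMetric(String topic, int partition) {
        return topic + "-" + partition + ".records-lag";
    }

    // Proposed scheme: one stable metric name ("records-lag"), with the
    // partition carried in a tag, matching how per-node and per-topic
    // metrics already use tags.
    static Map<String, String> perTagMetric(String topic, int partition) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("topic-partition", topic + "-" + partition);
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(perNameMetric("orders", 0)); // orders-0.records-lag
        System.out.println(perTagMetric("orders", 0));  // {topic-partition=orders-0}
    }
}
```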



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5990) Add generated documentation for Connect metrics

2017-09-28 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5990:


 Summary: Add generated documentation for Connect metrics
 Key: KAFKA-5990
 URL: https://issues.apache.org/jira/browse/KAFKA-5990
 Project: Kafka
  Issue Type: Sub-task
  Components: documentation, KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 1.0.0


KAFKA-5191 recently added a new {{MetricNameTemplate}} that is used to create 
the {{MetricName}} objects in the producer and consumer, as well as in the 
newly-added generation of metric documentation. The {{Metrics.toHtmlTable}} 
method then takes these templates and generates HTML documentation for the 
metrics.

Change the Connect metrics to use these templates and update the build to 
generate these metrics and include them in the Kafka documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site pull request #87: Update main bullet verbiage on the homepage

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/87


---


Build failed in Jenkins: kafka-trunk-jdk9 #71

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5746; Add new metrics to support health checks (KIP-188)

--
[...truncated 1.64 MB...]
kafka.message.MessageTest > testFieldValues STARTED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte STARTED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality STARTED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageCompressionTest > testCompressSize STARTED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress STARTED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq 
STARTED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially STARTED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator STARTED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConvers

[jira] [Created] (KAFKA-5989) disableLogging() causes an initialization loop

2017-09-28 Thread Tuan Nguyen (JIRA)
Tuan Nguyen created KAFKA-5989:
--

 Summary: disableLogging() causes an initialization loop
 Key: KAFKA-5989
 URL: https://issues.apache.org/jira/browse/KAFKA-5989
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.1
Reporter: Tuan Nguyen
 Attachments: App.java

Using {{disableLogging()}} for either of the built-in state store types causes 
an initialization loop in the StreamThread.

Case A - this works just fine:
{code}
final StateStoreSupplier testStore = Stores.create(topic)
.withStringKeys()
.withStringValues()
.inMemory()
//  .disableLogging() 
.maxEntries(10)
.build();
{code}

Case B - this does not:
{code}
final StateStoreSupplier testStore = Stores.create(topic)
.withStringKeys()
.withStringValues()
.inMemory()
.disableLogging() 
.maxEntries(10)
.build();
{code}

A brief debugging dive shows that in Case B, 
{{AssignedTasks.allTasksRunning()}} never returns true, because of a remnant 
entry in {{AssignedTasks#restoring}} that never gets properly restored.

See [^App.java] for a working test (requires ZK + Kafka ensemble, and at least 
one keyed message produced to the "test" topic)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk7 #2828

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5746; Add new metrics to support health checks (KIP-188)

--
[...truncated 385.58 KB...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFi

Build failed in Jenkins: kafka-trunk-jdk8 #2083

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5746; Add new metrics to support health checks (KIP-188)

--
[...truncated 564.38 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integra

[jira] [Created] (KAFKA-5988) Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE

2017-09-28 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5988:
-

 Summary: Consider removing StreamThread#STREAM_THREAD_ID_SEQUENCE
 Key: KAFKA-5988
 URL: https://issues.apache.org/jira/browse/KAFKA-5988
 Project: Kafka
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


StreamThread#STREAM_THREAD_ID_SEQUENCE is used for naming (numbering) 
StreamThreads.
It is used in create(), which is called from a loop in the KafkaStreams constructor.

We can remove STREAM_THREAD_ID_SEQUENCE and pass the loop index to create() instead.
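A minimal sketch of the idea (simplified, hypothetical names; the actual StreamThread.create() and KafkaStreams constructor differ): the static sequence and the caller-supplied loop index produce the same thread names, so the static can be dropped.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadNaming {
    // Current approach: a static sequence shared across create() calls
    private static final AtomicInteger STREAM_THREAD_ID_SEQUENCE = new AtomicInteger(1);

    static String createWithSequence(String clientId) {
        return clientId + "-StreamThread-" + STREAM_THREAD_ID_SEQUENCE.getAndIncrement();
    }

    // Proposed approach: the caller's loop index supplies the number directly
    static String createWithIndex(String clientId, int threadIdx) {
        return clientId + "-StreamThread-" + threadIdx;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 3; i++) {
            // Both yield app-StreamThread-1, -2, -3 for a single instance
            System.out.println(createWithSequence("app") + " == " + createWithIndex("app", i));
        }
    }
}
```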



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3705: KAFKA-5746: Add new metrics to support health chec...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3705


---


[jira] [Resolved] (KAFKA-5552) testTransactionalProducerTopicAuthorizationExceptionInCommit fails

2017-09-28 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta resolved KAFKA-5552.
-
   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.1.0)

> testTransactionalProducerTopicAuthorizationExceptionInCommit fails 
> ---
>
> Key: KAFKA-5552
> URL: https://issues.apache.org/jira/browse/KAFKA-5552
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0
>Reporter: Eno Thereska
>
> Got a unit test error: 
> https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5877/
> Error Message
> org.apache.kafka.common.KafkaException: Cannot execute transactional method 
> because we are in an error state
> Stacktrace
> org.apache.kafka.common.KafkaException: Cannot execute transactional method 
> because we are in an error state
>   at 
> org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:524)
>   at 
> org.apache.kafka.clients.producer.internals.TransactionManager.beginCommit(TransactionManager.java:190)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.commitTransaction(KafkaProducer.java:583)
>   at 
> kafka.api.AuthorizerIntegrationTest.testTransactionalProducerTopicAuthorizationExceptionInCommit(AuthorizerIntegrationTest.scala:1027)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> or

[GitHub] kafka pull request #3986: KAFKA-5949: Follow-up after latest KIP-161 changes

2017-09-28 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3986

KAFKA-5949: Follow-up after latest KIP-161 changes

 - compare KAFKA-5958

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-5949-exceptions-user-callbacks-KIP-161-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3986.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3986


commit 95ebc1ef168b35fe3c7e1e23201114fbafee12bb
Author: Matthias J. Sax 
Date:   2017-09-28T20:58:45Z

KAFKA-5949: Follow-up after latest KIP-161 changes
 - compare KAFKA-5958




---


Build failed in Jenkins: kafka-trunk-jdk8 #2082

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5888; System test to check ordering of messages with transactions

--
[...truncated 1.37 MB...]

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testTransactionalProducerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramSha256 STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramSha256 PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidMechanism STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidMechanism PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testDisabledMechanism STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
t

[GitHub] kafka pull request #3985: KAFKA-5987: Maintain order of metric tags in gener...

2017-09-28 Thread rhauch
GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/3985

KAFKA-5987: Maintain order of metric tags in generated documentation

The `MetricNameTemplate` is changed to use a `LinkedHashSet` to maintain 
the same order of the tags that are passed in. This tag order is then 
maintained when `Metrics.toHtmlTable` generates the MBean names for each of the 
metrics.

The `SenderMetricsRegistry` and `FetcherMetricsRegistry` both contain 
templates used in the producer and consumer, respectively, and these were 
changed to use a `LinkedHashSet` to maintain the order of the tags.

Before this change, the generated HTML documentation might order MBean names 
like the following:

```

kafka.connect:type=sink-task-metrics,connector={connector},partition={partition},task={task},topic={topic}
kafka.connect:type=sink-task-metrics,connector={connector},task={task}
```
However, after this change, the documentation would use the following order:
```
kafka.connect:type=sink-task-metrics,connector={connector},task={task}

kafka.connect:type=sink-task-metrics,connector={connector},task={task},topic={topic},partition={partition}
```

This is more readable as the code that is creating the templates has 
control over the order of the tags.

Note that JMX MBean names use ObjectName, which does not maintain the order of 
the properties (tags), so this change should have no impact on the actual JMX 
MBean names used in the metrics.
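The ordering difference is plain java.util behavior; a minimal, self-contained sketch (not the actual Kafka code) of why swapping a `HashSet` for a `LinkedHashSet` fixes the tag order:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class TagOrderDemo {
    public static void main(String[] args) {
        // Tags in the order the template author wrote them
        String[] tags = {"connector", "task", "topic", "partition"};

        // HashSet iterates in hash order, so the generated MBean name
        // string may scramble the tags
        Set<String> unordered = new HashSet<>(Arrays.asList(tags));

        // LinkedHashSet preserves insertion order, matching the author's intent
        Set<String> ordered = new LinkedHashSet<>(Arrays.asList(tags));

        System.out.println("HashSet:       " + String.join(",", unordered));
        System.out.println("LinkedHashSet: " + String.join(",", ordered));
        // The LinkedHashSet line is always connector,task,topic,partition
    }
}
```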

cc @wushujames

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-5987

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3985.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3985


commit e5379c999b03b1a10d3779e21659f4a7808dc53c
Author: Randall Hauch 
Date:   2017-09-28T20:07:34Z

KAFKA-5987 Maintain order of metric tags in generated documentation

The `MetricNameTemplate` is changed to use a `LinkedHashSet` to maintain 
the same order of the tags that are passed in. This tag order is then 
maintained when `Metrics.toHtmlTable` generates the MBean names for each of the 
metrics.

The `SenderMetricsRegistry` and `FetcherMetricsRegistry` both contain 
templates used in the producer and consumer, respectively, and these were 
changed to use a `LinkedHashSet` to maintain the order of the tags.




---


[GitHub] kafka pull request #3969: KAFKA-5888: System test to check ordering of messa...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3969


---


[jira] [Created] (KAFKA-5987) Kafka metrics templates used in document generation should maintain order of tags

2017-09-28 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-5987:


 Summary: Kafka metrics templates used in document generation 
should maintain order of tags
 Key: KAFKA-5987
 URL: https://issues.apache.org/jira/browse/KAFKA-5987
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 1.0.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 1.0.0


KAFKA-5191 recently added a new {{MetricNameTemplate}} that is used to create 
the {{MetricName}} objects in the producer and consumer, as well as in the 
newly-added generation of metric documentation. The {{MetricNameTemplate}} and 
the {{Metrics.toHtmlTable}} method do not maintain the order of the tags, which means 
the resulting HTML documentation will order the table of MBean attributes based 
upon the lexicographical ordering of the MBeans, each of which uses the 
lexicographical ordering of its tags. This can result in the following order:

{noformat}
kafka.connect:type=sink-task-metrics,connector={connector},partition={partition},task={task},topic={topic}
kafka.connect:type=sink-task-metrics,connector={connector},task={task}
{noformat}

However, if the MBeans maintained the order of the tags then the documentation 
would use the following order:

{noformat}
kafka.connect:type=sink-task-metrics,connector={connector},task={task}
kafka.connect:type=sink-task-metrics,connector={connector},task={task},topic={topic},partition={partition}
{noformat}

This would be more readable, and the code that is creating the templates would 
have control over the order of the tags. 

To maintain order, {{MetricNameTemplate}} should use a {{LinkedHashSet}} for 
the tags, and the {{Metrics.toHtmlTable}} method should also use a 
{{LinkedHashMap}} when building up the tags used in the MBean name.

Note that JMX MBean names use {{ObjectName}}, which does not maintain order, so 
this change should have no impact on JMX MBean names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: KIP-167 Updates

2017-09-28 Thread Ted Yu
Looks good.

Please update the discussion thread link.

On Thu, Sep 28, 2017 at 12:01 PM, Bill Bejeck  wrote:

> All,
>
> I have updated KIP-167 to include the bootstrapping status of any
> GlobalKTables defined in the application.
>
> The KIP has been updated:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 167%3A+Add+interface+for+the+state+store+restoration+process
>
> Thanks,
> Bill
>


Re: [VOTE] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-09-28 Thread Vahid S Hashemian
I'm bumping this up as it's awaiting one more binding +1, but I'd like to 
also mention a recent change to the KIP.

Since the current DescribeGroup response protocol does not include 
member-specific information such as preferred assignment strategies, or 
topic subscriptions, I've removed the corresponding ASSIGNMENT-STRATEGY 
and SUBSCRIPTION columns from --members and --members --verbose options, 
respectively. These columns will be added back once KIP-181 (that aims to 
enhance DescribeGroup response) is in place. I hope this small 
modification is reasonable. If needed, we can continue the discussion on 
the discussion thread.

And I'm not sure if this change requires a re-vote.

Thanks.
--Vahid



From:   "Vahid S Hashemian" 
To: dev 
Date:   07/27/2017 02:04 PM
Subject:[VOTE] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



Hi all,

Thanks to everyone who participated in the discussion on KIP-175, and 
provided feedback.
The KIP can be found at 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand

.
I believe the concerns have been addressed in the recent version of the 
KIP; so I'd like to start a vote.

Thanks.
--Vahid







KafkaSpout not consuming the first uncommitted offset data from kafka

2017-09-28 Thread senthil kumar
Hi Kafka,

I have a Trident topology in Storm which consumes data from Kafka. Now I am
seeing an issue in KafkaSpout: it is not consuming the first uncommitted
offset message from Kafka.

My storm version is 1.1.1 and kafka version is 0.11.0.0. I have a topic say
X and partition of the topic is 3.

I have following configuration to consume data using KafkaSpout


KafkaSpoutConfig<String, String> kafkaConfig =
KafkaSpoutConfig.builder(PropertyUtil.getStringValue(PropertyUtil.KAFKA_BROKERS),
PropertyUtil.getStringValue(PropertyUtil.TOPIC_NAME))
.setProp(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, "4194304")
.setProp(ConsumerConfig.GROUP_ID_CONFIG, PropertyUtil.getStringValue(PropertyUtil.CONSUMER_ID))
.setProp(ConsumerConfig.RECEIVE_BUFFER_CONFIG, "4194304")
.setProp(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true")
.setFirstPollOffsetStrategy(FirstPollOffsetStrategy.UNCOMMITTED_LATEST)
.build();

TopologyBuilder builder = new TopologyBuilder();

builder.setSpout("spout", new KafkaSpout<>(kafkaConfig), 3);

Following are my test cases

1. Processor started with new consumer id. The very first time it starts to
read the data from latest. Fine.
2. Sending some messages to kafka and i am seeing all the messages are
consumed by my trident topology.
3. Stopped my trident topology.
4. Sending some messages to kafka (partition_0). Say example
> msg_1
> msg_2
> msg_3
> msg_4
> msg_5

5. Started the topology. And kafkaspout consumes the data from msg_2. It is
not consuming the msg_1.
6. Stopped  the topology.
7. Sending some messages to kafka to all the partitions (_0, _1, _2). Say
example
Partition_0
> msg_6
> msg_7
> msg_8
Partition_1
> msg_9
> msg_10
> msg_11
Partition_2
> msg_12
> msg_13
> msg_14

8. Started the topology. And kafkaspout consumes following messages
> msg_7
> msg_8
> msg_10
> msg_11
> msg_13
> msg_14

It skipped the earliest uncommitted message in each partition.

Below is the definitions of UNCOMMITTED_LATEST in JavaDoc.

UNCOMMITTED_LATEST means that the kafka spout polls records from the last
committed offset, if any. If no offset has been committed, it behaves as
LATEST.

As per the definition, it should read from the last committed offset. But it
looks like it is reading from the earliest uncommitted offset + 1; the pointer
seems to be off by one.

Please have a look and let me know if anything wrong in my tests.

I would appreciate a response, even if this turns out not to be an issue.

Thanks,
Senthil


KIP-167 Updates

2017-09-28 Thread Bill Bejeck
All,

I have updated KIP-167 to include the bootstrapping status of any
GlobalKTables defined in the application.

The KIP has been updated:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-167%3A+Add+interface+for+the+state+store+restoration+process

Thanks,
Bill


Build failed in Jenkins: kafka-trunk-jdk8 #2081

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5901: Added Connect metrics specific to source tasks (KIP-196)

--
[...truncated 362.46 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete FAILED
org.scalatest.junit.JUnitTestFailedError: 1 did not equal 0 
UnderReplicatedPartitionCount not 0: 1
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
at 
org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
at 
org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
at 
kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete(MetricsDuringTopicCreationDeletionTest.scala:123)

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kaf

[jira] [Resolved] (KAFKA-5901) Create Connect metrics for source tasks

2017-09-28 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5901.
--
Resolution: Fixed

Issue resolved by pull request 3959
[https://github.com/apache/kafka/pull/3959]

> Create Connect metrics for source tasks
> ---
>
> Key: KAFKA-5901
> URL: https://issues.apache.org/jira/browse/KAFKA-5901
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> See KAFKA-2376 for parent task and 
> [KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
>  for the details on the metrics. This subtask is to create the "Source Task 
> Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3959: KAFKA-5901: Added Connect metrics specific to sour...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3959


---


Re: [DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-09-28 Thread Tom Bentley
I'm starting to think about KIP-179 again. In order to have more
manageably-scoped KIPs and PRs I think it might be worth factoring-out the
throttling part into a separate KIP. Wdyt?

Keeping the throttling discussion in this thread for the moment...

The throttling behaviour is currently spread across the `(leader|follower).
replication.throttled.replicas` topic config and the
`(leader|follower).replication.throttled.rate`
dynamic broker config. It's not really clear to me exactly what "removing
the throttle" is supposed to mean. I mean we could reset the rate to
Long.MAX_VALUE or we could change the list of replicas to an empty list.
The ReassignPartitionsCommand does both, but there is some small utility in
leaving the rate while clearing the list, if you've discovered the "right"
rate for your cluster/workload and want it to be sticky for next time.
Does anyone do this in practice?

With regards to throttling, it would be
>> worth thinking about a way where the throttling configs can be
>> automatically removed without the user having to re-run the tool.
>>
>
> Isn't that just a matter of updating the topic configs for
> (leader|follower).replication.throttled.replicas at the same time we
> remove the reassignment znode? That leaves open the question about whether
> to reset the rates at the same time.
>

Thinking some more about my "update the configs at the same time we remove
the reassignment znode" suggestion. The reassignment znode is persistent,
so the reassignment will survive a zookeeper restart. If there was a flag
for the auto-removal of the throttle it would likewise need to be
persistent. Otherwise a ZK restart would remember the reassignment, but
forget about the preference for auto removal of throttles. So, we would use
a persistent znode (a child of the reassignment path, perhaps) to store a
flag for throttle removal.

Thoughts?

Cheers,

Tom


[GitHub] kafka pull request #3984: MINOR: update streams quickstart for KIP-182

2017-09-28 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3984

MINOR: update streams quickstart for KIP-182



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka quickstart-update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3984.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3984


commit 98c3fd602b92c9cee45a1f626e4e8ca63882b9aa
Author: Damian Guy 
Date:   2017-09-28T14:54:02Z

update quickstart




---


Jenkins build is back to normal : kafka-trunk-jdk9 #68

2017-09-28 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk7 #2825

2017-09-28 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3983: KAFKA-5986: Streams State Restoration never comple...

2017-09-28 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3983

KAFKA-5986: Streams State Restoration never completes when logging is 
disabled

When logging is disabled and there are state stores, the task never 
transitions from restoring to running. This is because we only ever check whether 
the task has state stores and return false on initialization if it does. The 
check should be whether we have changelog partitions, i.e., whether we actually need to restore.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka restore-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3983.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3983


commit c60856362a9dfe1f8b68d76cbb5a783eef6abfff
Author: Damian Guy 
Date:   2017-09-28T11:52:34Z

fix task initialization when logging disabled




---


[GitHub] kafka pull request #3981: HOTFIX: fix build compilation error

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3981


---


[GitHub] kafka pull request #3982: TRIVIAL: fix connect Flatten error message

2017-09-28 Thread sv3nd
GitHub user sv3nd opened a pull request:

https://github.com/apache/kafka/pull/3982

TRIVIAL: fix connect Flatten error message

In case of an error while flattening a record with schema, the Flatten 
transform was reporting an error about a record without schema, as follows: 

```
org.apache.kafka.connect.errors.DataException: Flatten transformation does 
not support ARRAY for record without schemas (for field ...)
```

The expected behaviour would be an error message specifying "with schemas". 

This looks like a simple copy/paste typo from the schemaless equivalent 
methods in the same file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sv3nd/kafka 
svend/fix-flatten-error-log-message

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3982.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3982


commit 8c46e1ec4199878f7e0c1401a768da8501350da8
Author: Svend Vanderveken 
Date:   2017-09-28T11:49:46Z

fix connect Flatten error message

In case of an error while flattening a record with schema, the Flatten 
transform was reporting an error about a record without schema




---


Build failed in Jenkins: kafka-trunk-jdk9 #67

2017-09-28 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] KAFKA-5958; Global stores access state restore listener

[ismael] KAFKA-5960; Follow-up cleanup

[damian.guy] KAFKA-5949; User Callback Exceptions need to be handled properly

[damian.guy] KAFKA-5979; Use single AtomicCounter to generate internal names

[ismael] KAFKA-5976; Improve trace logging in RequestChannel.sendResponse

[ismael] MINOR: Use full package name when classes referenced in documentation

--
[...truncated 1.44 MB...]
K extends Object declared in interface KGroupedStream
:82:
 warning: [deprecation] reduce(Reducer,String) in KGroupedStream has been 
deprecated
public KTable reduce(final Reducer reducer,
^
  where V,K are type-variables:
V extends Object declared in interface KGroupedStream
K extends Object declared in interface KGroupedStream
:383:
 warning: [deprecation] count(SessionWindows,StateStoreSupplier) 
in KGroupedStream has been deprecated
public KTable, Long> count(final SessionWindows sessionWindows,
 ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:378:
 warning: [deprecation] count(SessionWindows) in KGroupedStream has been 
deprecated
public KTable, Long> count(final SessionWindows sessionWindows) 
{
 ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:371:
 warning: [deprecation] count(SessionWindows,String) in KGroupedStream has been 
deprecated
public KTable, Long> count(final SessionWindows sessionWindows, 
final String queryableStoreName) {
 ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:284:
 warning: [deprecation] count(Windows,StateStoreSupplier) in 
KGroupedStream has been deprecated
public  KTable, Long> count(final Windows 
windows,
^
  where W,K are type-variables:
W extends Window declared in method 
count(Windows,StateStoreSupplier)
K extends Object declared in interface KGroupedStream
:279:
 warning: [deprecation] count(Windows) in KGroupedStream has been 
deprecated
public  KTable, Long> count(final Windows 
windows) {
^
  where W,K are type-variables:
W extends Window declared in method count(Windows)
K extends Object declared in interface KGroupedStream
:272:
 warning: [deprecation] count(Windows,String) in KGroupedStream has been 
deprecated
public  KTable, Long> count(final Windows 
windows,
^
  where W,K are type-variables:
W extends Window declared in method count(Windows,String)
K extends Object declared in interface KGroupedStream
:262:
 warning: [deprecation] count(StateStoreSupplier) in 
KGroupedStream has been deprecated
public KTable count(final StateStoreSupplier 
storeSupplier) {
   ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:251:
 warning: [deprecation] count(String) in KGroupedStream has been deprecated
public KTable count(final String queryableStoreName) {
   ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:96:
 warning: [deprecation] StateStoreSupplier in 
org.apache.kafka.streams.processor has been

[jira] [Created] (KAFKA-5986) Streams State Restoration never completes when logging is disabled

2017-09-28 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5986:
-

 Summary: Streams State Restoration never completes when logging is 
disabled
 Key: KAFKA-5986
 URL: https://issues.apache.org/jira/browse/KAFKA-5986
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.1
Reporter: Damian Guy
Assignee: Damian Guy
Priority: Critical
 Fix For: 1.0.0, 0.11.0.2


When logging is disabled on a state store, the store restoration never 
completes. This is likely because there are no changelogs, but more 
investigation is required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3986) completedReceives can contain closed channels

2017-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3986.

   Resolution: Fixed
Fix Version/s: (was: 1.1.0)
   1.0.0

> completedReceives can contain closed channels 
> --
>
> Key: KAFKA-3986
> URL: https://issues.apache.org/jira/browse/KAFKA-3986
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Reporter: Ryan P
> Fix For: 1.0.0
>
>
> I'm not entirely sure why at this point, but it is possible to throw a 
> NullPointerException when processing completedReceives. This happens when a 
> fairly substantial number of connections are initiated with the server 
> simultaneously. 
> The processor thread does carry on, but it may be worth investigating how the 
> channel could be both closed and present in completedReceives. 
> The NPE in question is thrown here:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/network/SocketServer.scala#L490
> It can not be consistently reproduced either. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-09-28 Thread Paolo Patierno
Hi,


maybe we want to start without the delete records policy for now, until there is a 
real need. So I'm removing it from the KIP.

I hope for more comments on this KIP-204 so that we can start a vote on Monday.


Thanks.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Thursday, September 28, 2017 5:56 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi,


I have just updated the KIP-204 description with the new TopicDeletionPolicy 
suggested by the KIP-201.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Tuesday, September 26, 2017 4:57 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi Tom,

as I said in the KIP-201 discussion, I'm ok with having a unique 
DeleteTopicPolicy, but then it could be useful to have more information than just 
the topic name; partition and offset information for message deletion could be useful for 
fine-grained use cases.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Tom Bentley 
Sent: Tuesday, September 26, 2017 4:32 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi Paolo,

I guess a RecordDeletionPolicy should work at the partition level, whereas
the TopicDeletionPolicy should work at the topic level. But then we run
into a similar situation as described in the motivation for KIP-201, where
the administrator might have to write+configure two policies in order to
express their intended rules. For example, it's no good preventing people
from deleting topics if they can delete all the messages in those topics,
or vice versa.

On that reasoning, perhaps there should be a single policy interface
covering topic deletion and message deletion. Alternatively, the topic
deletion API could also invoke the record deletion policy (before the topic
deletion policy I mean). But the former would be more consistent with
what's proposed in KIP-201.

Wdyt? I can add this to KIP-201 if you want.

Cheers,

Tom





On 26 September 2017 at 17:01, Paolo Patierno  wrote:

> Hi Tom,
>
> I think we could live with the current authorizer based on topic deletion
> (for both deleting messages and deleting the topic as a whole), but the
> RecordsDeletePolicy could be even more fine-grained, making it possible
> to prevent deleting messages for specific partitions (inside the topic)
> or starting from a specific offset.
>
> I can think of some user solutions where they know exactly what the
> partitions mean inside a specific topic (because they use a custom
> partitioner on the producer side), so they know what kind of messages
> are inside a partition, allowing them to delete some but not others.
>
> With such a policy a user could also check the timestamp related to the
> offset to allow or deny deletion on a time basis.
>
>
> Wdyt ?
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Tom Bentley 
> Sent: Tuesday, September 26, 2017 2:55 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> Hi Edoardo and Paolo,
>
>
> On 26 September 2017 at 14:10, Paolo Patierno  wrote:
>
> > What could be useful use cases for having a RecordsDeletePolicy ? Records
> > can't be deleted for a topic name ? Starting from a specific offset ?
> >
>
> I can imagine some users wanting to prohibit using this API completely.
> Maybe others divide up the topic namespace according to some scheme, and so
> it would be allowed for some topics, but not for others based on the name.
> Both these could be done using authz, but would be much simpler to express
> using a policy.
>
> Since both deleting messages and deleting topics are authorized using
> delete operation on th

[GitHub] kafka pull request #3875: MINOR: Use full package name when classes referenc...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3875


---


[jira] [Resolved] (KAFKA-5976) RequestChannel.sendReponse records incorrect size for NetworkSend with TRACE logging

2017-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5976.

   Resolution: Fixed
Fix Version/s: 1.0.0

> RequestChannel.sendReponse records incorrect size for NetworkSend with TRACE 
> logging
> 
>
> Key: KAFKA-5976
> URL: https://issues.apache.org/jira/browse/KAFKA-5976
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.1
>Reporter: huxihx
>Assignee: huxihx
> Fix For: 1.0.0
>
>
> In RequestChannel.scala, RequestChannel.sendResponse records incorrect size 
> for `NetworkSend` when trace logging is enabled, as shown below:
> {code:title=RequestChannel.scala|borderStyle=solid}
> def sendResponse(response: RequestChannel.Response) {
> if (isTraceEnabled) {
>   val requestHeader = response.request.header
>   trace(s"Sending ${requestHeader.apiKey} response to client 
> ${requestHeader.clientId} of " + s"${response.responseSend.size} bytes.")
> }
> {code}
> `responseSend` is of `scala.Option` type so it should be 
> `response.responseSend.get.size`. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3981: HOTFIX: fix build compilation error

2017-09-28 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3981

HOTFIX: fix build compilation error



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka fix-build

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3981


commit 2a8289e1c556608d8ca3e2423840609207ad4eef
Author: Damian Guy 
Date:   2017-09-28T10:31:49Z

fix build breakage




---


[GitHub] kafka pull request #3961: KAFKA-5976: RequestChannel.sendResponse should rec...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3961


---


[GitHub] kafka pull request #3979: KAFKA-5979: Use single AtomicCounter to generate i...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3979


---


[GitHub] kafka pull request #3939: KAFKA-5949: User Callback Exceptions need to be ha...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3939


---


[GitHub] kafka pull request #3976: MINOR: Follow-up cleanup of KAFKA-5960

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3976


---


[GitHub] kafka pull request #3973: KAFKA-5958: Global stores access state restore lis...

2017-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3973


---


[jira] [Resolved] (KAFKA-5961) NullPointerException when consumer restore read messages with null key.

2017-09-28 Thread Andres Gomez Ferrer (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Gomez Ferrer resolved KAFKA-5961.

Resolution: Fixed

> NullPointerException when consumer restore read messages with null key.
> ---
>
> Key: KAFKA-5961
> URL: https://issues.apache.org/jira/browse/KAFKA-5961
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Andres Gomez Ferrer
> Fix For: 0.11.0.1, 0.11.0.0
>
>
> If you have a kafka streams that use:
> {code:java}
> stream.table("topicA")
> {code}
> When the application is running, if you send a message with a null key, it 
> works fine. Later, if you restart the application, when the restore consumer 
> starts to read topicA from the beginning it crashes, because it doesn't 
> filter out the null key.
> I know it isn't normal to send a null key to a topic that backs a table, 
> but sometimes it can happen, and I think Kafka Streams could protect 
> itself.
> This is the stack trace:
> {code}
> ConsumerCoordinator [ERROR] User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
> my-cep-app_enricher failed on partition assignment
> java.lang.NullPointerException
>   at org.rocksdb.RocksDB.put(RocksDB.java:488)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStore.putInternal(RocksDBStore.java:254)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStore.access$000(RocksDBStore.java:67)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStore$1.restore(RocksDBStore.java:164)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:242)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:201)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.register(AbstractProcessorContext.java:99)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:160)
>   at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
>   at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:131)
>   at 
> org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:63)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:141)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
> {code}
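
For reference, one way to guard against this is to skip null keys during restore instead of passing them to RocksDB (which throws NullPointerException on put with a null key). The snippet below is only a plain-Java sketch of that idea — the class and method names are made up for illustration, and the real fix would live in the store's restore callback or in a filter on the input stream:

```java
import java.util.*;

public class RestoreGuard {
    // Simplified stand-in for a state store restore callback: changelog
    // entries with a null key (e.g. records produced without a key) are
    // skipped instead of being written into the store.
    static Map<String, String> restore(List<Map.Entry<String, String>> changelog) {
        Map<String, String> store = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : changelog) {
            if (e.getKey() == null) {
                continue; // malformed record: ignore on restore instead of crashing
            }
            store.put(e.getKey(), e.getValue());
        }
        return store;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> changelog = Arrays.asList(
            new AbstractMap.SimpleEntry<>("a", "1"),
            new AbstractMap.SimpleEntry<>(null, "2"), // record with a null key
            new AbstractMap.SimpleEntry<>("b", "3"));
        System.out.println(restore(changelog)); // {a=1, b=3}
    }
}
```

A filter with the same null-key check applied to the input stream before materializing the table would prevent the bad record from ever reaching the changelog.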



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5985) Mention the need to close store iterators

2017-09-28 Thread Stanislav Chizhov (JIRA)
Stanislav Chizhov created KAFKA-5985:


 Summary: Mention the need to close store iterators
 Key: KAFKA-5985
 URL: https://issues.apache.org/jira/browse/KAFKA-5985
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.11.0.0
Reporter: Stanislav Chizhov


Store iterators should be closed in all or most cases, but currently this is 
not consistently reflected in the documentation and javadocs. For instance, 
https://kafka.apache.org/0110/documentation/streams/developer-guide#streams_developer-guide_interactive-queries_custom-stores
 does not mention the need to close an iterator, and it provides an example 
that does not do so. 
Some of the fetch methods do mention the need to close an iterator returned 
(e.g. 
https://kafka.apache.org/0110/javadoc/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.html#range(K,%20K)),
 but others do not: 
https://kafka.apache.org/0110/javadoc/org/apache/kafka/streams/state/ReadOnlyWindowStore.html#fetch(K,%20long,%20long)

It makes sense to: 
- update the javadoc for all store methods that return iterators to reflect 
that the returned iterator needs to be closed
- mention it in the documentation and update the related examples.
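
The recommended pattern can be illustrated as below. This is a hypothetical sketch using a stand-in interface rather than the real org.apache.kafka.streams.state.KeyValueIterator (which likewise extends Closeable): try-with-resources guarantees the iterator is closed even if iteration throws.

```java
import java.util.*;

public class IteratorClose {
    // Stand-in for Kafka Streams' KeyValueIterator, which also extends
    // Closeable; the real iterators hold store resources until close().
    interface KeyValueIterator<K, V> extends Iterator<Map.Entry<K, V>>, AutoCloseable {
        @Override
        void close(); // narrowed to throw no checked exception
    }

    static KeyValueIterator<String, Long> range(SortedMap<String, Long> store,
                                                String from, String to) {
        Iterator<Map.Entry<String, Long>> it =
            store.subMap(from, to).entrySet().iterator();
        return new KeyValueIterator<String, Long>() {
            public boolean hasNext() { return it.hasNext(); }
            public Map.Entry<String, Long> next() { return it.next(); }
            public void close() { /* release underlying store resources here */ }
        };
    }

    // Sums values for keys in [from, to); try-with-resources guarantees
    // close() runs even if iteration throws.
    static long sumRange(SortedMap<String, Long> store, String from, String to) {
        long sum = 0;
        try (KeyValueIterator<String, Long> iter = range(store, from, to)) {
            while (iter.hasNext()) {
                sum += iter.next().getValue();
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        SortedMap<String, Long> store = new TreeMap<>();
        store.put("a", 1L);
        store.put("b", 2L);
        store.put("c", 3L);
        System.out.println(sumRange(store, "a", "c")); // 3: keys "a" and "b" only
    }
}
```

Showing this shape in the developer-guide examples would make the close() requirement hard to miss.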





[DISCUSS] KIP-101: Alter Replication Protocol to use Leader Generation rather than High Watermark for Truncation

2017-09-28 Thread Alex.Chen
Hi All:

  We are using Kafka 0.8.2.2 in our SIT environment, and we hit a
data loss case when all brokers (2 brokers) went down and restarted. I
am trying to understand the log management and recovery mechanism in Kafka,
and I found a useful description document: KIP-101 - Alter Replication
Protocol to use Leader Epoch rather than High Watermark for Truncation (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-101+-+Alter+Replication+Protocol+to+use+Leader+Epoch+rather+than+High+Watermark+for+Truncation)
.  However, I have some difficulty understanding the "Scenario 1: High
Watermark Truncation followed by Immediate Leader Election" description in
this article.  In this scenario, leader B has updated its HW to m2, while
follower A has just got m2 but has not updated its local HW to m2, and "the
follower (A) has message m2, but has not yet got confirmation from the
leader (B) that m2 has been committed (the second round of replication,
which lets (A) move forward its high watermark past m2, has yet to happen)".

Is that possible?  Since there are only 2 brokers, I think leader B
updates its HW to m2 only when follower A fetches m2; and when follower A
fetches m2 and leader B updates its HW to m2, follower A will get this
updated HW (m2) in the fetch response for m2 (in the HighwaterMarkOffset
field), so it won't need any confirmation in a second round as the
article mentions. I am confused about this part. I think if there were
another follower C which fetched m2 later than follower A, that
would make follower A wait for a second round of replication to confirm
HW (m2), but there is no broker C in this description.
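
For what it's worth, the two-round behaviour the KIP describes can be sketched with a toy model (the names and simplifications below are mine, not from the KIP). The key assumption is that the leader learns the follower's position from the fetch *offset* it receives, so the response that delivers m2 is built before the leader can know the follower holds m2, and the raised HW only reaches the follower in the next round:

```java
public class HwPropagation {
    // Toy model of KIP-101 "Scenario 1": leader B holds m1 (offset 0) and
    // m2 (offset 1); follower A has only m1 so far. This is illustrative
    // pseudocode-in-Java, not broker code.
    static long leaderLeo = 2;         // leader's log end offset
    static long followerRemoteLeo = 1; // leader's view of the follower's LEO
    static long leaderHw = 1;          // current high watermark

    // The leader records the follower's position from the fetch offset,
    // recomputes the HW, and returns that HW in the fetch response.
    static long handleFetch(long fetchOffset) {
        followerRemoteLeo = fetchOffset;
        leaderHw = Math.min(leaderLeo, followerRemoteLeo);
        return leaderHw; // the response also carries messages from fetchOffset on
    }

    public static void main(String[] args) {
        // Round 1: follower fetches at offset 1 (it has m1, wants m2).
        // The response delivers m2 but still advertises HW = 1, because the
        // leader cannot yet know the follower will durably hold m2.
        System.out.println("round 1 HW = " + handleFetch(1)); // round 1 HW = 1

        // Round 2: follower fetches at offset 2, proving it has m2;
        // only now does the follower learn HW = 2.
        System.out.println("round 2 HW = " + handleFetch(2)); // round 2 HW = 2
    }
}
```

Under this model the follower is always one fetch round behind the leader's HW, which is exactly the window the KIP's scenario exploits.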

   Am I missing some procedure, or is there some flaw in this description
of the case?  Any explanations are appreciated. Thanks to all Kafka
developers for bringing us such a great product.


Alex.Chen